Monday, December 18, 2017
This post originally appeared on InfoWorld on December 1, 2017.
Are you caught up with the real-time (r)evolution?
Real-time analytics have been around since well before computers. Here's where the evolution of real time is headed in 2018
Thursday, November 16, 2017
This post originally appeared on InfoWorld on November 1, 2017.
4 sources of latency and how to avoid them
Even Google and Amazon can’t process data instantly—here’s how to combat latency in your real-time application
Despite all the advances we’ve seen in data processing and database technology, there is no escaping data’s Public Enemy No. 1: latency, the time delay before a response is generated and returned. Even Gartner’s definition of a zero-latency enterprise acknowledges that latency can never actually be zero because computers need time to “think.”
While you may never truly achieve zero latency, the goal is always to deliver information in the shortest amount of time possible, so ensuring predictable, low latency processing is key when building a real-time application. Often the hardest part, though, is identifying the sources of latency in your application and subsequently eliminating them. If you can’t remove them entirely, there are steps you can take to reduce or manage their consequences.
Before, during, and after computing the response, there are a number of areas that can add unwanted latency. Below are some common sources and tips for minimizing their impact.
The network
Most applications use the network in some manner, whether between the client application and the server or between server-side processes and applications. The important thing to know here is that distance matters—the closer your client is to the server, the lower the network latency. For instance, round-trip latency between nodes within the same datacenter can cost 500 microseconds, while it can be an additional 50 milliseconds between nodes in California and New York.
What to do:
- Use faster networking, such as better network interface cards and drivers and 10GigE networking.
- Eliminate network hops. In clustered queuing and storage systems, data can be horizontally scaled across many host machines, which can help you avoid extra network round-trip connections.
- Keep client and server processes close together, ideally within the same datacenter and on the same physical network switch.
- If your application is running in the cloud, keep all processing in one availability zone.
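As a rough illustration of why even "close" is never free, the sketch below measures round-trip latency to a loopback echo server in Python. It is an illustration, not a benchmarking tool; even on the same machine, the round trip is never zero.

```python
import socket
import threading
import time

def echo_server(sock):
    # Accept one connection and echo bytes back until the client closes.
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)

# Loopback echo server: a stand-in for a nearby service.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
client.sendall(b"ping")  # warm up the connection
client.recv(1024)

start = time.perf_counter()
for _ in range(100):
    client.sendall(b"ping")
    client.recv(1024)
rtt_us = (time.perf_counter() - start) / 100 * 1_000_000
print(f"mean loopback round trip: {rtt_us:.0f} us")
client.close()
```

Run the same loop against a server in another datacenter and the difference in round-trip time is the network latency your application pays on every request.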
The database
Many real-time applications are data intensive, requiring some sort of database to service the real-time request. Databases, even in-memory databases, make data durable by storing it to persistent storage. For high-velocity real-time applications, making data durable can add significant unwanted latency, and disk I/O, like network I/O, is costly. Accessing memory can be upwards of 10,000x faster than a single disk seek.
What to do:
- Avoid writing to disk. Instead, use write-through caches, in-memory databases, or data grids (modern in-memory data stores are optimized for low latency and high read/write performance).
- If you do need to write to disk, combine writes where possible, especially when using fsync. The goal is to optimize algorithms to minimize the impact of disk I/O. Consider asynchronous durability as a way to avoid stalling main line processing with blocking I/O.
- Use fast storage systems, such as SSDs or spinning disks with battery-backed caches.
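The payoff of combining writes before an fsync can be sketched in Python. The exact numbers depend entirely on your storage hardware and filesystem, so treat this as an illustration rather than a benchmark.

```python
import os
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "journal.log")
records = [f"event-{i}\n".encode() for i in range(1000)]

# Naive: fsync after every record -- one disk flush per write.
start = time.perf_counter()
with open(path, "wb") as f:
    for rec in records[:100]:  # only 100 records, or this takes a while
        f.write(rec)
        f.flush()
        os.fsync(f.fileno())
per_record = time.perf_counter() - start

# Batched: write everything, then a single fsync for the whole group.
start = time.perf_counter()
with open(path, "wb") as f:
    f.writelines(records)
    f.flush()
    os.fsync(f.fileno())
batched = time.perf_counter() - start

print(f"per-record fsync (100 records):  {per_record * 1000:.1f} ms")
print(f"single fsync (1,000 records):    {batched * 1000:.1f} ms")
```

On most systems the batched version lands all ten times as many records in a fraction of the time, because the dominant cost is the flush, not the write.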
The operating environment
The operating environment in which you run your real-time application—on shared hardware, in containers, in virtual machines, or in the cloud—can significantly impact latency.
What to do:
- Run your application on dedicated hardware so other applications can’t inadvertently consume system resources and impact your application’s performance.
- Be wary of virtualization—even on your own dedicated hardware, hypervisors impose a layer of code between your application and the operating system. When configured properly, performance degradation can be minimized, but your application is still running in a shared environment and may be impacted by other applications on the physical hardware.
- Understand the nuances of the programming language environment used by your application. Some languages, such as Java and Go, use automatic memory management, relying on periodic garbage collection to reclaim unused memory. The processing impact of garbage collection can be unpredictable and introduce unwanted latency at seemingly random times.
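The garbage collection point can be seen in Python, which is also garbage collected. This sketch forces full collections on a large object graph and reports the pause each one imposes; in a real application these pauses arrive at times you don't control.

```python
import gc
import time

# Allocate a large object graph so a full collection has real work to do.
graph = [{"id": i, "payload": list(range(10))} for i in range(200_000)]

pauses_ms = []
for _ in range(5):
    start = time.perf_counter()
    gc.collect()  # force a full collection
    pauses_ms.append((time.perf_counter() - start) * 1000)

print("gc pauses (ms):", [f"{p:.1f}" for p in pauses_ms])
```

For latency-critical sections, some applications disable automatic collection (`gc.disable()`) and collect explicitly at safe points, trading memory growth for predictable response times.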
Your code
When it comes to coding, there are some common core patterns that can pose barriers to speed.
What to do:
- Inefficient algorithms are the most obvious source of latency in code. When possible, look for unnecessary loops or nested expensive operations—restructuring loops and caching expensive computation results usually helps.
- Multi-threaded locks stall processing and thus introduce latency. Use design patterns that avoid locking, especially when writing server-side applications.
- Blocking operations cause long wait times, so use an asynchronous (non-blocking) programming model to better utilize hardware resources, such as network and disk I/O.
- Unbounded queues may sound harmless, but they allow unbounded resource usage, and no computer has unbounded resources. Limiting queue depths and providing back pressure typically leads to less wait time in your code and more predictable latencies.
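The last point can be sketched with Python's bounded `queue.Queue`: a full queue blocks the producer (back pressure) instead of growing without limit. The names and sizes here are illustrative.

```python
import queue
import threading

# Bounded queue: put() blocks when full, pushing back on the producer
# instead of letting memory usage (and wait time) grow without bound.
work = queue.Queue(maxsize=100)
DONE = object()  # sentinel telling the consumer to stop
processed = []

def consumer():
    while (item := work.get()) is not DONE:
        processed.append(item)

t = threading.Thread(target=consumer)
t.start()

for i in range(1000):
    work.put(i)  # blocks (back pressure) once 100 items are in flight
work.put(DONE)
t.join()
print(f"processed {len(processed)} items with queue depth capped at 100")
```

The producer never gets more than 100 items ahead of the consumer, so worst-case queueing delay stays bounded and predictable.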
Combating the enemy
Building real-time applications requires that the application developer not only write efficient code but also understand the operating environment and hardware constraints of the systems on which the application will be deployed. Provisioning the fastest networking equipment and the fastest CPUs won't by itself satisfy your real-time latency requirements.
Thoughtful application architecture, efficient software algorithms, and an optimal hardware operating environment are all key considerations for fighting latency.
Tuesday, October 17, 2017
This post originally appeared on InfoWorld on September 27, 2017.
What real-time application pattern works for you?
3 common real-time application patterns that require a real-time decision
At first glance, building a real-time application may sound like a daunting proposition, one that involves technical challenges as well as a significant financial investment, especially when you have an application goal of responding within a fraction of a second. But advances in hardware, networking, and software—both commercial as well as open source—make building real-time applications today very achievable. So what do these real-time applications look like?
This article presents three common real-time application patterns that require a real-time decision, meaning a response returned or transaction executed based on real-time input. To determine which pattern to apply to your application, you must first define your real-time objective. Ask yourself: How fast does the application need to respond?
Each application pattern addresses a particular level of real-time response: sub-millisecond, milliseconds, or 100 milliseconds and greater.
Pattern 1: Embedded applications—delivering responses in sub-milliseconds
To achieve sub-millisecond response, you need to eliminate any server-side networking and embed your application onto a computer or hardware appliance. This is the bleeding edge of real-time processing, reserved for specialized applications that are not very common. The pattern is relevant in areas such as high-frequency trading applications, nuclear power plant systems, and signal processing and sensor applications.
Delivering sub-millisecond responses involves low-level programming, often at the kernel level. Standard kernels, operating systems, and device drivers can add unwanted processing overhead resulting in extra latency. Applications that care about every microsecond or nanosecond, every clock cycle, should seek to eliminate this overhead and code directly on the hardware. Alternatively, if you can withstand some additional latency, you can forgo writing low-level code and build and run your application directly on the operating system, embedding a data store such as SQLite, if needed.
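As a sketch of the "forgo low-level code and embed a data store" option, here is an in-process SQLite query in Python. There is no server and no network hop, so the query cost is purely local compute; the table and data are illustrative.

```python
import sqlite3
import time

# In-process SQLite: the database lives inside the application, so a
# query involves no server-side networking at all.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
db.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("temp", 20.0 + i * 0.1) for i in range(1000)],
)

start = time.perf_counter()
(avg,) = db.execute("SELECT AVG(value) FROM readings").fetchone()
elapsed_us = (time.perf_counter() - start) * 1_000_000
print(f"avg={avg:.2f} computed in {elapsed_us:.0f} us, no network round trip")
```

A file-backed database works the same way; the `:memory:` option simply removes disk I/O from the picture as well.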
Pattern 2: High speed OLTP—delivering responses in milliseconds
This is the classic client-server OLTP application architecture where a client application talks to a server-side application and database. These applications are very common — you have likely interacted with them several times today without even realizing it. These applications detect credit card fraud, compute personalized webpages, and deliver optimized digital ads. For instance, when you use your iPhone or Android phone to make a call, run an app, or access the internet, several decisions (transactions) must be made by the telco provider before your action is allowed to occur: Is your account valid? Do you have enough quota (voice or data)? What policy should apply to the action (throttling etc.)? And each transaction must respond in milliseconds.
Optimizing network performance between the client application and the server allows for low-latency responses in high-speed OLTP application patterns. Low-cost gigabit Ethernet (GigE) and relatively low-cost 10GigE networking are readily available to most application developers. Network performance can be further optimized by keeping the application on the same network switch or rack as the server, or at minimum on the same LAN. In other words, keep the client and server in close proximity. Within the server, the application and database usually minimize blocking disk I/O, either by avoiding it completely, by applying sequential I/O, or by using advanced storage such as SSDs or the newly emerging non-volatile RAM.
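When budgeting a millisecond-level OLTP path, percentiles matter more than averages: a fast median is little comfort if the 99th percentile blows the budget. A minimal sketch, where `handle_request` is a hypothetical stand-in for your real transaction:

```python
import time

def handle_request(payload):
    # Stand-in for a real OLTP transaction (lookup plus a small computation).
    table = {i: i * 2 for i in range(100)}
    return sum(table[k] for k in payload)

latencies_ms = []
for _ in range(1000):
    start = time.perf_counter()
    handle_request(range(100))
    latencies_ms.append((time.perf_counter() - start) * 1000)

latencies_ms.sort()
p50 = latencies_ms[len(latencies_ms) // 2]
p99 = latencies_ms[int(len(latencies_ms) * 0.99)]
print(f"p50={p50:.3f} ms  p99={p99:.3f} ms  max={latencies_ms[-1]:.3f} ms")
```

Tracking p99 (or p999) over time is what tells you whether garbage collection, lock contention, or a noisy neighbor is quietly eating your latency budget.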
One additional point worth noting is that with next generation in-memory data stores and caches, it is even possible to achieve low single-digit millisecond latency with highly available clustered data stores; that is, databases and systems spanning multiple nodes or processes. Today, many shared-nothing, in-memory databases, data grids, and NoSQL stores offer highly available data stores with predictable low latency (often single-digit millisecond) response times.
Pattern 3: Streaming fast data pipelines—delivering responses in seconds
A fast data pipeline, historically rooted in complex event processing (CEP) applications, is becoming a more broadly deployed real-time application pattern today. In this application pattern, a never-ending stream of immutable events is being ingested with real-time analytics applied.
Typical applications have a queuing or streaming system that delivers events, ultimately feeding the data lake, managed by Hadoop, Spark, or a data warehouse. Before arriving at the historical archive, the event stream is processed by a fast data store or computational engine. It is the role of this engine to aggregate, dedupe, and compute real-time analytics on incoming events and generate real-time alerts or decisions as required. The analytics are often displayed on a dashboard, and alerts or decisions are generated. A person or business process reacts to the alert at human speed. A few seconds is often enough time to ensure any late data has arrived to inform the decision.
In this pattern, data flows in one direction. This real-time engine often holds a predetermined amount of “hot data,” either in the form of continuously computed analytics or a database of the last hour, day, or week’s worth of data. Older data is delivered to the historic data lake or data warehouse.
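The hot-window idea can be sketched in a few lines of Python. The window size, timestamps, and archive list are illustrative stand-ins for a real pipeline's retention policy and data lake.

```python
from collections import deque

WINDOW = 60            # keep the last 60 "seconds" of events hot
hot = deque()          # (timestamp, value) pairs, oldest first
archive = []           # stand-in for the historic data lake / warehouse

def ingest(ts, value):
    hot.append((ts, value))
    # Age events that have fallen out of the hot window off to the archive.
    while hot and hot[0][0] < ts - WINDOW:
        archive.append(hot.popleft())

def rolling_average():
    # Continuously computed analytic over the hot data only.
    return sum(v for _, v in hot) / len(hot) if hot else 0.0

base = 1_700_000_000   # an arbitrary epoch-style timestamp
for i in range(120):
    ingest(base + i, float(i))   # one event per "second", values 0..119

print(f"hot events: {len(hot)}, archived: {len(archive)}")
print(f"rolling average over the last {WINDOW}s: {rolling_average():.1f}")
```

Data flows one way: events land in the hot window, analytics are computed while they are there, and older events drain to the archive.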
Advances in queuing systems like Kafka, in-memory databases, data grids, and NoSQL data stores make implementing this pattern possible. This pattern has broad usage across the internet of things (IoT), electric smart grids, log file management, and mobile in-game analytics processing, among others. We'll be seeing more of this pattern in future applications.
The age of real time is now
If you are just starting out with your real-time application, first consider what response rate your problem domain requires. If it requires sub-millisecond response, consider an embedded application. If your application is high-velocity OLTP, explore high-performance network configurations and new offerings in low-latency data store and in-memory database technology. If you need to handle relentless streams of data, consider a fast data-pipeline architecture.
Low-cost computing, readily accessible high-speed networking, and numerous open source and commercial data storage software offerings capable of low-latency data processing means that real-time applications are no longer out of reach.
Thursday, September 14, 2017
The term “real-time” is thrown around a lot these days, but it’s a buzzword that is often surrounded by ambiguity. Every day, it seems a new product is announcing its real-time capability. But how is real-time measured? It certainly isn’t measured in days (or even hours)—so is it measured in:
- Minutes?
- Seconds?
- Milliseconds?
- Microseconds?
- Nanoseconds?
- All of the above?
Everyone, from developers to software corporate marketing departments to even consumers, seems to have a slightly different answer. So let’s explore the question “What does ‘real-time’ really mean?”
Let’s begin with the dictionary definition:
Real-time—“of or relating to applications in which the computer must respond as rapidly as required by the user or necessitated by the process being controlled.”
While this definition continues with the subjective theme, it does confirm that the correct answer to how to measure real-time is “All of the above.” The meaning of the term real-time varies based on application need—the amount of time a computer (the application) takes to respond and the acceptable latency is as fast as required by the problem domain.
Rather than look at applications and determine if they are real-time or not, let’s examine various time units and understand the types of real-time applications that require those response rates:
Nanoseconds: A nanosecond (ns) is one billionth of a second. Admiral Grace Hopper famously explained a nanosecond using an 11.8-inch wire, as that is the maximum distance electricity can travel in one nanosecond. This quick video of Hopper is worth watching if you haven’t yet seen it.
With this in mind, it is easy to see why nanoseconds are the unit used to measure the speed of hardware, such as the time it takes to access computer memory. Worrying about nanosecond latency is at the bleeding edge of real-time computing and is primarily driven by innovation with hardware and networking technology.
Microseconds: A microsecond (µs) is one millionth of a second. Real-time applications that worry about microsecond latency are high-frequency trading (HFT) applications. Financial trading firms spend large sums of money investing in the latest networking and computer hardware to eliminate microseconds of latency within their trading platforms. A trading decision has to be made in as few microseconds as possible in order to execute ahead of competition and thus maximize profit.
Milliseconds: A millisecond (ms) is one one-thousandth of a second. To put this in context, the speed of a human eye blink is 100 to 400 milliseconds, or between a 10th and half of a second. Network performance is often measured in milliseconds. Real-time applications that worry about latency in milliseconds include telecom applications, digital ad networks, and self-driving cars. The decision on what optimal ad to display or whether there is enough balance to let a cellphone call proceed must be made on the order of 100 milliseconds.
Seconds: We’re starting to slow down here. We’re still in the realm of real-time, but we are now venturing into near real-time. Sub-minute processing time is often more than good enough for applications that process log files, compute analytics on event streams, or issue alerts. These real-time applications drive actions and decisions that are made in human-reaction time rather than machine time. Reducing the response time by one tenth of a second (100ms), which may be costly to implement, adds no value for these applications.
Minutes: Waiting minutes may seem like an eternity to a high-frequency trading application. However, consider package shipment and delivery alerts or ecommerce stock availability notifications. Those applications certainly feel real-time to me—the fact that I receive a “delivery notification” text message within 10 minutes of a delivery made to my home is very satisfying.
Finally, though I discounted it up front, let’s briefly consider hours and days. While this time range is generally not regarded as true real-time, if you’ve been getting finance or sales reports on a monthly, weekly, or daily basis, and now you can get up-to-date reports every hour, that may be as real-time as you need. The modernization of these applications is often described as upgrading from “batch” to “real-time.”
The old proverb is correct: Time is money. Throughout history, the ability to make real-time decisions has meant the difference between life and death, between profit and loss. The value of time has never been higher and therefore speed has never been more critical to business applications of all kinds.
Luckily, we live in an age where fast computing is very affordable and making decisions in real-time is economically achievable for most applications. The first step is determining the appropriate definition of real-time that aligns with the needs of your business applications.