Sensors as Co-Processors in Next-Gen Smartphones

The complexity and quantity of information will require new ways to handle the data

By Ed Brachocki, Kionix & Stephen Maine, Opta3 LLC

Micro-electro-mechanical systems (MEMS) sensors, which enable machines to hear, see, touch, feel and smell, are creating opportunities for new consumer products and services that profoundly affect the way we live.

MEMS accelerometers, magnetometers and gyroscopes, for example, already enable smartphones to respond to our hand gestures, rotate displays when we tilt the handsets, tell us which way is north, pinpoint our longitudes and latitudes, count our steps and lead the way to our destinations.

Such competence is the result of seamless integration among the sensor hardware, middleware and smartphone application software. Achieving such seamless integration requires that at least one of these three layers—hardware, middleware or software—has the computational intelligence to interpret data from our surroundings and feed it to the other two layers for a desired result. The million-dollar question is: In which of these layers should the intelligence reside?

Unfortunately for smartphone manufacturers, there is currently no firm answer to that question. Taking a look at a mobile operating system (OS) such as Google’s Android, the most popular smartphone platform, may tell us why.

In the tradition of the Linux kernel on which it is based, Android is the result of collaboration among approximately 80 hardware, software and telecom vendors in the Open Handset Alliance, an open-source industry group.

Open source doesn’t mean available
When participating vendors develop a new application, they are often forced to add the computational intelligence to make it work because the technology they need is not available. Case in point: Android has no built-in sensor-fusion solution for magnetometers or accelerometers. Yes, there are placeholders in the Android sensor API for sensor fusion (quaternion, rotation matrix, linear acceleration, gravity), but it is up to sensor vendors such as Kionix to provide the actual algorithms that populate those placeholders. Therefore, if system and application designers want to combine sensory data from disparate sources to make an application more accurate, more complete or more dependable, they need to add that capability themselves. As these efforts are multiplied over and over—and Android is said to have more than 200,000 available apps already—the intended open-source effort ultimately becomes closed to all but a few companies with the financial resources to create breakthrough technologies on their own.
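A minimal sketch of those placeholders in use, assuming the standard Android sensor API: the framework's helper calls only repackage an already-fused rotation vector, while the fusion algorithm itself must come from vendor code below the API.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// The framework defines the fused sensor type and the conversion helpers,
// but the values arriving here are only as good as the vendor-supplied
// fusion algorithm feeding TYPE_ROTATION_VECTOR.
public class FusionPlaceholders implements SensorEventListener {
    private final float[] rotationMatrix = new float[9];
    private final float[] quaternion = new float[4];

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
            // Repackage the fused rotation vector into the two forms the
            // API reserves placeholders for: rotation matrix and quaternion.
            SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
            SensorManager.getQuaternionFromVector(quaternion, event.values);
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // No action needed for this sketch.
    }
}
```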

The sheer complexity and quantity of information that sensors can create require new ways of handling the raw data before it can be incorporated into a computational platform, along with alternate ways of managing and storing it.

It wasn’t always so
Previously, accelerometers would simply detect when a specific acceleration threshold was reached, such as when a laptop computer was dropped. The information flow to the host processor was practically zero. The "yes" indication, confirming that the laptop was being dropped, was received by the system controller, which would then notify the hard drive to shut down and park the read-write head. The data-processing needs of the host were minimal, and the sensor's local hardware did little more than process its own signal.
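To make the contrast concrete, here is a rough sketch of that threshold model; the 0.35 g figure is illustrative, not a vendor specification. In free fall, the measured acceleration magnitude drops toward zero, so a single comparison reduces a continuous signal to one yes/no event.

```java
// Threshold-era logic: the sensor (or a tiny controller beside it)
// emits a single "dropped" event, so almost no data reaches the host.
public class DropDetector {
    // Illustrative threshold; in free fall the magnitude nears 0 g.
    private static final double FREE_FALL_THRESHOLD_G = 0.35;

    public static boolean isFalling(double ax, double ay, double az) {
        double magnitude = Math.sqrt(ax * ax + ay * ay + az * az);
        return magnitude < FREE_FALL_THRESHOLD_G;
    }
}
```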

Later, when accelerometers were employed to notify host applications about the orientation of handheld devices, there were computational requirements for multi-axis motion detection and for resolving acceleration forces, as well as for tracking past and present positioning. Now there was a need for more dialogue between the host operating system and the sensor, plus communication with the application at the presentation layer.
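Even the simplest orientation decision already involves comparing multiple axes. A hedged sketch follows; production handsets add filtering, hysteresis and history tracking on top of this.

```java
// Decide portrait vs. landscape from the gravity components along the
// handset's x and y axes; the dominant axis indicates how it is held.
public class OrientationEstimator {
    public enum Orientation { PORTRAIT, LANDSCAPE }

    public static Orientation estimate(double gravityX, double gravityY) {
        return Math.abs(gravityY) >= Math.abs(gravityX)
                ? Orientation.PORTRAIT
                : Orientation.LANDSCAPE;
    }
}
```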

This is the point at which the complexities of sensors, the operating system and mobile applications became challenging and data-rate intensive, while also requiring the interchange of data between several applications and several—maybe disparate—sensors.

Many of today’s smartphone computational platforms rely on available operating systems such as Android that do not necessarily accommodate the high information-rate streams of sensors.

Android – not up to the task
The Android OS architecture consists of a Linux kernel, including device-specific hardware drivers, that allows the processor to operate. Sitting on the Linux kernel are abstraction and adaptation layers that allow Java applets and programs to run. The adaptation layer operates like a browser running real-time applications. Each app runs at the top layer, totally independent of and isolated from all other apps, whether installed or running. The architecture permits some apps to run concurrently.

In this architecture, the resource demands of new and future sensors on the underlying host processor could become so significant that they would force all other running apps and processes to freeze (assuming the OS allowed the sensors to hog the requested bandwidth). While the sensors are being serviced, all the other communications and resident running apps still require system resources and servicing.

Using the resident host processor and operating system—to support sensor motion algorithms, for example—may simply overload today's embedded-processing platforms. Some flavors of Android have no DirectX equivalent that allows applications to tunnel through to the base layers and manage the lower levels of the processing stack. Any sensor placing high demands on processor bandwidth would not be accommodated.

So until Android can build in the appropriate processing algorithms and allocate the necessary resources and device management, any new sensor that has relatively high bandwidth demands requires additional processing power that can only be delivered by additional hardware.

Platform upgrades unlikely
While we are peering into the future, let us suppose that next-generation processors will have the ability to integrate many high-level functions that easily accommodate the high-rate data transactions of new sensors. This approach has significant appeal and few apparent downsides.

Is this the answer? Maybe. But smartphone developers who have made significant investments in legacy processors will likely prefer to add sensors to an existing design, along with software and new apps, without reinvesting in a new processing platform. They would rather use the existing infrastructure, service the sensors with added hardware and software, and deploy the existing protocols and device handlers provided by the operating system.

Insight into how the smartphone industry will solve this problem may be found in the recent history of personal computers. As the PC industry developed during the 1980s, hardware design was simplified to accommodate the intensive processing requirements of printers and modems. Ultimately, an embedded microcontroller or microprocessor was engaged to process data locally to lower the overhead of the host processor. Designers achieved system integration through a set of software drivers that communicated with the hardware abstraction layer of the operating system.

Short term: smarter, more powerful sensors
Engineers at Kionix believe that this traditional approach is inevitable as the processing demands of sensor-information streams increase. In other words, it will be up to the sensor devices themselves to process the data and provide information to the middleware and the application software in a smartphone.
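In concept, the division of labor might look something like the interface below. Every name here is hypothetical, invented for illustration; no such standard interface exists today.

```java
// Hypothetical host-side view of a sensor acting as a co-processor:
// the fusion math runs on the sensor itself, and the host receives
// only compact, ready-to-use results at a low event rate.
public interface SensorCoProcessor {
    /** Delivers an already-fused orientation as a unit quaternion. */
    void onFusedOrientation(float w, float x, float y, float z);

    /** Fires only when the on-sensor algorithm recognizes a gesture. */
    void onGestureDetected(int gestureId);
}
```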

To see how this might play out, take the example of smartphone location-based services. The typical outdoor solution is based on the global positioning system (GPS), which uses satellites to tell time and fix your location on the earth, as long as the receiver can calculate its distance from already-known satellite positions. Besides helping you find your way, GPS also provides such perks as geotagging your smartphone photos with the exact place and time of day they were taken. It is the smartphone sensors, however, that provide data to the GPS system so it can give you the local time and tell you where in the world you are.
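The underlying distance calculation is simple in principle. A minimal sketch, ignoring the receiver clock error that real GPS receivers must also solve for:

```java
// Each satellite broadcasts the time it sent its signal from a known
// position; travel time multiplied by the speed of light gives a range.
public class GpsRange {
    private static final double SPEED_OF_LIGHT_M_PER_S = 299_792_458.0;

    public static double rangeMeters(double sentTimeSeconds, double receivedTimeSeconds) {
        return (receivedTimeSeconds - sentTimeSeconds) * SPEED_OF_LIGHT_M_PER_S;
    }
}
```

Ranges to at least four satellites let the receiver solve for latitude, longitude, altitude and its own clock error.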

Since GPS provides location information only where there is an unobstructed line of sight to the satellites, it, of course, does not work inside a building. Indoors, smartphones must be smart enough to switch to a local-range communications technology such as Bluetooth or Wi-Fi to facilitate the more intimate interactions—e.g., hand gestures—between smartphone users and the smartphone sensors.

Both local and long-range positioning systems need to communicate with sensors such as gyroscopes, magnetometers and accelerometers—sometimes one at a time, sometimes concurrently—to provide users with the most accurate response. Furthermore, the operating system and the application software are required to concurrently exchange and process information from each of the sensor devices and their respective applications.
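On Android, that concurrency begins with the application registering one listener against several physical sensors at once; a sketch using the standard SensorManager calls:

```java
import android.hardware.Sensor;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Register one listener for the three motion sensors; the framework then
// delivers events from all of them, interleaved, to the same callback.
public class MultiSensorSetup {
    public static void listenToMotionSensors(SensorManager sensorManager,
                                             SensorEventListener listener) {
        int[] types = {
            Sensor.TYPE_ACCELEROMETER,
            Sensor.TYPE_MAGNETIC_FIELD,
            Sensor.TYPE_GYROSCOPE
        };
        for (int type : types) {
            Sensor sensor = sensorManager.getDefaultSensor(type);
            if (sensor != null) {
                // SENSOR_DELAY_GAME trades battery life for responsiveness.
                sensorManager.registerListener(listener, sensor,
                        SensorManager.SENSOR_DELAY_GAME);
            }
        }
    }
}
```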

If we delve into the requirements and individual tasks needed to manage all of the above, it becomes evident that this is not only very complex but will also require enormous amounts of bandwidth and processor cycles to execute in real time. Additionally, the physics and mathematics required to convert the many degrees of motion taking place in real time have to be handled in such a manner that the application programmer gets to work with parameters that are simple to understand and manage.

Furthermore, the complexity has to be hidden to ensure that, from the end user’s point of view, the technology just plain works.
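Android's existing API shows what that hiding can look like: with two standard calls, all of the underlying rotation math collapses into three familiar angles. A sketch, assuming the stock SensorManager helpers:

```java
import android.hardware.SensorManager;

// Reduce raw gravity and geomagnetic readings to azimuth, pitch and
// roll (in radians): three parameters an app programmer can reason about.
public class SimpleAngles {
    public static float[] anglesFrom(float[] gravity, float[] geomagnetic) {
        float[] rotationMatrix = new float[9];
        float[] angles = new float[3];
        if (SensorManager.getRotationMatrix(rotationMatrix, null, gravity, geomagnetic)) {
            SensorManager.getOrientation(rotationMatrix, angles);
        }
        return angles; // {azimuth, pitch, roll}
    }
}
```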

No standards exist currently. Maybe the time has come to standardize where in the product-implementation phase the sensor-bandwidth issue is solved. Kionix engineers believe that, in the short term, the processing power should lie in the sensor hardware acting as a co-processor to the microprocessor.

Future cooperation?
Long term, as sensor capabilities and performance levels increase, perhaps vendors from all three device layers will be able to work together to see that sensor-processing demand is met.

Admittedly, developing industry standards for sensor behavior and performance would likely be slow going, and the technology is developing too fast to wait. Furthermore, sensor suppliers—Kionix included—have no desire to commoditize their solutions, even if standards do emerge. The newness of motion control in mobile applications, together with its rapid acceptance and performance enhancements, makes it extremely difficult to nail down a set of standards without impeding the growth and diversity of potential applications.


Ed Brachocki is director of marketing at Kionix. He was a member of the start-up team for LG's mobile phone business and was general manager, North America, for Alcatel/TCL mobile phones. He has an extensive background in telecommunications and consumer electronics products.


Stephen Maine is owner and president of Opta3 LLC, a management-consulting firm in Phoenix. He holds numerous patents and was recognized by EE Times as one of the 30 people who made the most significant contributions to the development of the integrated circuit industry.