Eclipse SmartHome - Onward!

Eclipse SmartHome is about three years old and has more than 2500 commits authored by more than 100 contributors! It's time for an update on what is currently going on in the project.

What is Eclipse SmartHome?

Eclipse SmartHome is an Internet of Things (IoT) project hosted at the Eclipse Foundation. More precisely, it is a framework for building smart home solutions with a strong focus on heterogeneous environments, i.e. solutions that integrate different protocols or standards. Its purpose is to provide uniform access to devices and information and to facilitate different kinds of interactions with them. The framework consists of a set of OSGi (Open Service Gateway Initiative) bundles that can be deployed on an OSGi runtime and that define OSGi services as extension points. The stack is meant to be usable on any kind of system that can run OSGi - be it a multi-core server, a residential gateway or a Raspberry Pi.

While it is possible to run Eclipse SmartHome on its own, it is actually meant as a foundation for building smart home solutions. One of these solutions is the openHAB project, which is also open source and the origin of the Eclipse SmartHome project. By now, you will also find commercial solutions on the market.

The major goal of the project is to interconnect devices from different vendors that speak different protocols and by default don’t "know" or "understand" each other, thereby building an IoT integration platform. Consequently, devices are called Things within Eclipse SmartHome. These things expose their functionality as Channels. A channel typically has some state and may also receive commands. As an example, a smart thermostat would have one channel to set the desired temperature, another on which it publishes the current room temperature, and potentially many more.

It’s the task of Bindings to define concrete things corresponding to a specific device of a certain vendor. They are OSGi bundles containing all the code and metadata required to integrate a physical device into Eclipse SmartHome. In ThingTypes, they describe which channels such a thing should have, which configuration parameters are needed in order to communicate with it and how it can be discovered automatically. The most important part of a binding, though, is the ThingHandler it provides, which manages the actual communication with the real device. It sets up the connection, listens or polls for value changes and sends out commands.
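To make the handler's role concrete, here is a minimal sketch using simplified stand-in types. The names below (DeviceConnection, ThermostatHandler, the string-based channel ids) are illustrative only and not the actual Eclipse SmartHome API, which uses richer types such as ChannelUID and Command:

```java
// Stand-in for a binding's device communication layer (hypothetical).
interface DeviceConnection {
    void send(String channelId, String command); // push a command to the device
    String poll(String channelId);               // read the current value
}

// A minimal thermostat handler: it maps channel commands to device calls,
// mirroring what a real ThingHandler does in its handleCommand logic.
class ThermostatHandler {
    private final DeviceConnection connection;

    ThermostatHandler(DeviceConnection connection) {
        this.connection = connection;
    }

    // Called by the framework when a channel of this thing receives a command.
    void handleCommand(String channelId, String command) {
        if ("target-temperature".equals(channelId)) {
            connection.send(channelId, command);
        }
    }

    // Called when polling the device for the current room temperature.
    String refreshCurrentTemperature() {
        return connection.poll("current-temperature");
    }
}
```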

With Items, we leave the real-world representation and move to an abstraction of it. They are the entities that users interact with: they may be linked to channels in order to control a device, but might also be virtual, representing a derived state. Items may also be grouped differently than the physical hardware is.

What's Next?

Voice Control

As you might have guessed, development is not ceasing. Some of the most interesting new features coming up are related to voice control. Basically, three building blocks are addressed: speech to text (STT), human language interpretation (HLI) and text to speech (TTS).

  • Speech to text is about transforming the sound of voices into written text.
  • Human language interpretation is about making sense of what we said, so that sensible actions can be derived from it. This is what makes the whole thing smart in the end.
  • Text to speech focuses on language synthesis, allowing the smart home system to talk to us with a more or less natural voice.

As always, the Eclipse SmartHome framework provides the APIs and some reference implementations, but these can be replaced or extended by the solutions built on top of the framework, which can integrate their own services or use more advanced or potentially commercial implementations.

Of course, the voice needs to get in and out of the computer somehow. While using the standard microphone and speaker jacks of the computer on which Eclipse SmartHome is running is nice as a start, you might want to use the gadgets you already have in your house - at least those with built-in microphones or speakers. For these, the AudioSource and AudioSink abstractions exist.

Bindings may implement these interfaces in order to register Things as such, making them available to the voice engine. Currently only the JavaSound reference implementation is available, but it’s just a matter of time until bindings get adapted to register their applicable Things as sinks or sources so that they can be used as such.
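The sink side of this abstraction can be sketched as follows. Note that the interface below is a trimmed-down, hypothetical shape - the real AudioSink interface carries more metadata such as supported audio formats and volume handling:

```java
import java.io.IOException;
import java.io.InputStream;

// Illustrative, simplified shape of an audio sink (not the real framework interface).
interface SimpleAudioSink {
    String getId();                  // identifies the device, e.g. a thing UID
    void process(InputStream audio); // play the given audio stream
}

// A binding could register each speaker-equipped Thing as a sink like this one.
class CountingSpeakerSink implements SimpleAudioSink {
    int bytesPlayed = 0; // exposed for demonstration purposes

    public String getId() {
        return "demo:speaker:livingroom"; // hypothetical thing UID
    }

    public void process(InputStream audio) {
        try {
            byte[] buffer = new byte[256];
            int n;
            while ((n = audio.read(buffer)) != -1) {
                bytesPlayed += n; // a real sink would hand the bytes to the hardware
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```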

Integrating speech to text engines into Eclipse SmartHome is rather simple (in contrast to the voice recognition itself, of course): an implementation of the STTService has to be provided. Apart from some purely informational methods, there is basically only one method to implement, in which the actual voice recognition should happen. Once recognition is done, the given listeners are informed via an event that there is text available for further processing.
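The listener flow can be illustrated with a small sketch. All names here are hypothetical (this is not the STTService API), and the actual recognition step is replaced by passing the text in directly, since the point is the event flow, not the engine:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the STT flow: recognition runs, then registered listeners are
// notified with the recognized text. Names are illustrative only.
class SketchSttService {
    private final List<Consumer<String>> listeners = new ArrayList<>();

    void addListener(Consumer<String> listener) {
        listeners.add(listener);
    }

    // A real service would receive an audio stream and run the STT engine here;
    // this stand-in takes the "recognized" text directly to show the event flow.
    void recognize(String recognizedText) {
        for (Consumer<String> listener : listeners) {
            listener.accept(recognizedText); // fire the "text available" event
        }
    }
}
```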

Depending on the CPU power of the system the SmartHome framework is running on, it might be feasible to integrate online services here. Alternatively, this building block may be skipped completely in scenarios where other devices already handle the voice recognition, e.g. when running on a mobile phone. In that case, the resulting text can be passed directly from the mobile device to the HLI layer of the framework.

As of today, there is no reference implementation available yet. So if you would like to integrate your favorite STT online service, this would be a good starting point for contributing to the project.

The human language interpreter, though, is the part where things start to get magical. An implementation of the HumanLanguageInterpreter interface needs to somehow analyze the text and derive appropriate actions from it. In order to make this a little easier, there is a base class for rule-based interpreters (AbstractRuleBasedInterpreter), containing useful helper methods. Have a look at the StandardInterpreter example to see how this works.
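To give a feel for the rule-based approach, here is a toy interpreter in the spirit of AbstractRuleBasedInterpreter. It is a sketch, not the framework class: it only understands "switch the <item label> on|off" and returns the derived command as a string:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A toy rule-based interpreter: it maps an utterance matching the rule
// "switch the <item label> on|off" to a command for the labeled item.
class ToyInterpreter {
    private static final Pattern RULE = Pattern.compile("switch the (.+) (on|off)");
    private final Map<String, String> itemsByLabel = new HashMap<>();

    void addItem(String label, String itemName) {
        itemsByLabel.put(label, itemName);
    }

    // Returns e.g. "Light_Livingroom -> ON", or null if the text is not understood.
    String interpret(String text) {
        Matcher m = RULE.matcher(text.toLowerCase());
        if (!m.matches()) {
            return null; // no rule matched
        }
        String itemName = itemsByLabel.get(m.group(1));
        if (itemName == null) {
            return null; // no item with that label
        }
        return itemName + " -> " + m.group(2).toUpperCase();
    }
}
```

A real interpreter would send the derived command to the item via the framework's event bus instead of returning a string.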

If you want to try it out, the OSGi console command is the fastest way to get started:

osgi> smarthome voice interpret switch the light on

or, since the interpretation is localizable, e.g. for our German readers:

osgi> smarthome voice interpret schalte die Wohnzimmerlampe ein

The terms “light” and “Wohnzimmerlampe” in the examples above are item labels, so you can simply reference the devices by the same names you gave them in the user interfaces.

Feel free to play around with it, create your own interpreter and share it. Help us with our mission to turn connected homes into smart homes!

Last but not least, a good conversation involves two parties. Therefore the house should be able to answer questions or confirm commands. In order to do so, a TTSService must be implemented. It’s basically the opposite of the STTService, i.e. text gets synthesized into voice audio. Again, there is mainly one method of interest, which takes a string and some meta information and has to return an audio stream.
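The shape of that one method can be sketched as follows. The class and parameter names are hypothetical (not the TTSService API), and the "synthesis" is a stand-in that just returns the bytes of the text, since a real implementation would call out to a speech engine at that point:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the TTS direction: text in, audio stream out.
class SketchTtsService {
    // "locale" stands in for the meta information (voice, audio format, ...).
    InputStream synthesize(String text, String locale) {
        // A real implementation would invoke a TTS engine here and return its
        // audio output; this stand-in returns the tagged text bytes instead.
        byte[] fakeAudio = (locale + ":" + text).getBytes(StandardCharsets.UTF_8);
        return new ByteArrayInputStream(fakeAudio);
    }
}
```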

As a reference implementation there is the mactts extension, which makes use of the TTS engine on macOS. If you are eager to try it out, there is a console command to do so. It has the convenient feature that item states can be referenced directly:

osgi> smarthome voice say it is %OutsideTemperature% degrees celsius outside
osgi> smarthome voice say the front door is %FrontDoor%

Onwards!

As you have seen, there is currently a lot going on within the Eclipse SmartHome project. If you are interested in hearing more about it, attend the talk at EclipseCon Europe: Colonization of Mars - Meet the Eclipse SmartHome powered Mars Rover. We hope to see you in Ludwigsburg!

About the Authors

Sun Tan

Simon Kaufmann
itemis AG