Further Proposals

In this section, I outline ideas and proposals for multimodal and speech interaction in the car and elsewhere that have not yet reached the prototype stage, but are well documented and published.

I’d be happy to discuss use cases and details anytime - perhaps there is even more that might be of interest to you!


Learning by Example - teaching your car your preferred longitudinal and lateral controls

It is a current trend to 'learn' preferred user settings. However, this can take a long time and sometimes results in rather paternalistic systems that prescribe certain things instead of asking for the actual wants and needs of a user. My patent disclosure on the driver teaching the car her or his preferred speed and steering angle for a particular stretch of road is just one possible realization of a paradigm in which a user tells a system how she or he wants certain things done, and perhaps even which things should be done. In this case, it is done by example - the existing controls (steering wheel, brake and gas pedal) are treated as multimodal input devices as well. Such an interaction can be started by the driver ("Let me show you how I want this to be done") or by the system ("This is the way I do it - is this ok with you?") and verified in a dialog as well.
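To make this concrete, here is a minimal sketch of how such a teach-by-example loop could be structured in software. All class and method names, and the simple averaging, are my illustrative assumptions for this page - they are not the design from the patent disclosure.

```python
"""Hypothetical sketch of 'learning by example' for longitudinal and
lateral control. Names and logic are illustrative assumptions only."""

from dataclasses import dataclass, field


@dataclass
class ControlSample:
    position_m: float          # distance along the road segment, in meters
    speed_kmh: float           # driver's demonstrated speed here
    steering_angle_deg: float  # driver's demonstrated steering angle


@dataclass
class SegmentProfile:
    segment_id: str            # identifier of the stretch of road
    samples: list[ControlSample] = field(default_factory=list)


class TeachByExample:
    """Records driver demonstrations and replays them as suggestions."""

    def __init__(self) -> None:
        self.profiles: dict[str, SegmentProfile] = {}
        self._recording: SegmentProfile | None = None

    def start_demonstration(self, segment_id: str) -> str:
        # Driver-initiated: "Let me show you how I want this to be done."
        self._recording = SegmentProfile(segment_id)
        return "OK, show me. I am watching your speed and steering."

    def observe(self, position_m: float, speed_kmh: float, angle_deg: float) -> None:
        # The normal controls double as multimodal input devices.
        if self._recording is not None:
            self._recording.samples.append(
                ControlSample(position_m, speed_kmh, angle_deg))

    def stop_demonstration(self) -> str:
        assert self._recording is not None
        self.profiles[self._recording.segment_id] = self._recording
        self._recording = None
        # Verification happens in dialog, not silently.
        return "Got it. Shall I drive this stretch like that from now on?"

    def propose(self, segment_id: str) -> str | None:
        # System-initiated: "This is the way I do it - is this ok with you?"
        profile = self.profiles.get(segment_id)
        if profile is None or not profile.samples:
            return None
        avg = sum(s.speed_kmh for s in profile.samples) / len(profile.samples)
        return f"Here I would drive about {avg:.0f} km/h, as you showed me. OK?"
```

The point of the sketch is the interaction pattern: the same controls the driver uses anyway double as the teaching input, and the spoken dialog merely frames and confirms the demonstration.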


Communication with the Outside - how a vehicle can interact with people outside

People tend to postpone the question of how cars can interact with people in their environment until the point where autonomous vehicles absolutely need to do so. This patent disclosure elaborates on different ways in which the car can communicate with the outside even now, using light, sound, movement of the body or body parts, as well as speech output. The application example is a straightforward, output-only reminder function: Please bring wiper fluid from the gas station!
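As a hedged illustration of that output-only reminder, the following sketch routes one message through several exterior channels in sequence. The channel functions and their ordering are hypothetical placeholders, not the disclosure's actual interfaces.

```python
"""Illustrative sketch: delivering an output-only reminder to people
outside the car. Channel names below are assumptions for this page."""

from typing import Callable


def flash_lights(msg: str) -> None:
    print(f"[lights] flashing pattern to attract attention ({msg!r})")

def chirp(msg: str) -> None:
    print(f"[sound] short attention chirp ({msg!r})")

def wave_mirror(msg: str) -> None:
    print(f"[body part] folding/unfolding a mirror as a gesture ({msg!r})")

def speak_outside(msg: str) -> None:
    print(f"[exterior speech] {msg}")


# Ordered chain: attract attention first, then deliver the message.
CHANNELS: list[Callable[[str], None]] = [
    flash_lights, chirp, wave_mirror, speak_outside]


def remind_outside(message: str) -> None:
    """Output-only: no input from the outside is expected here."""
    for channel in CHANNELS:
        channel(message)


remind_outside("Please bring wiper fluid from the gas station!")
```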

However, it is also possible to issue speech commands from the outside to the car, e.g. to open doors, switch on lights, etc. More details can be found in another patent disclosure.
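The outside-to-car direction could be sketched as simply as this; the command vocabulary and the is_authorized() check are assumptions for illustration - a real system would of course need robust speaker authentication.

```python
"""Illustrative sketch of exterior speech commands to the car.
Vocabulary and authorization check are assumptions only."""

COMMANDS = {
    "open the doors": "doors.unlock",
    "open the trunk": "trunk.open",
    "switch on the lights": "lights.on",
}

def is_authorized(speaker_id: str) -> bool:
    # Placeholder: e.g. voice biometrics plus key-fob proximity.
    return speaker_id == "owner"

def handle_exterior_utterance(speaker_id: str, utterance: str) -> str:
    action = COMMANDS.get(utterance.lower().strip())
    if action is None:
        return "unrecognized command"
    if not is_authorized(speaker_id):
        return "refused: speaker not authorized"
    return f"executing {action}"

print(handle_exterior_utterance("owner", "Switch on the lights"))
```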


Monitoring Acoustic Interaction - verifying correct spoken dialogs

Some ideas are ahead of their time. Using speech and acoustics to monitor interacting humans in order to increase a system’s situation awareness may only now be becoming of real interest. Over and above assessing cognitive workload levels, as we showed some time ago, today’s speech recognition performance makes it possible to map what is currently being said, and by whom, onto world models, and thus verify that the human-human interaction conforms to them. For example, the crew of a search-and-rescue (SAR) helicopter on a mission may have to perform certain tasks following a protocol, and air and ground traffic controllers and their counterparts are obliged to use only certain expressions. The monitoring technology makes it possible to track who says what to whom, and when, and - if need be - raise a flag and remind or request that the prescribed forms (as laid down in the world model) be honored. There are many possibilities to explore!
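To illustrate the core mechanism, here is a minimal sketch of a dialog monitor that checks recognized utterances against a protocol taken from a world model. The simplified controller/pilot exchange and the keyword matching are my assumptions, not a deployed system; real phraseology checking would sit on top of proper speech recognition and understanding.

```python
"""Illustrative sketch of verifying a spoken protocol against a world
model. Protocol content and matching are assumptions for this page."""

from dataclasses import dataclass


@dataclass
class ProtocolStep:
    speaker: str           # who is expected to speak
    addressee: str         # whom they must address
    required_phrase: str   # phraseology that must occur in the utterance


# World model: a simplified controller/pilot exchange with read-back.
PROTOCOL = [
    ProtocolStep("controller", "pilot", "cleared for takeoff"),
    ProtocolStep("pilot", "controller", "cleared for takeoff"),
]


class DialogMonitor:
    """Tracks who says what to whom, and flags protocol deviations."""

    def __init__(self, protocol: list[ProtocolStep]) -> None:
        self.protocol = protocol
        self.step = 0

    def on_utterance(self, speaker: str, addressee: str, text: str) -> str | None:
        """Returns a reminder string if the exchange deviates, else None."""
        if self.step >= len(self.protocol):
            return None
        expected = self.protocol[self.step]
        ok = (speaker == expected.speaker
              and addressee == expected.addressee
              and expected.required_phrase in text.lower())
        if ok:
            self.step += 1
            return None
        return (f"Reminder: expected {expected.speaker} to tell "
                f"{expected.addressee} '{expected.required_phrase}'.")


monitor = DialogMonitor(PROTOCOL)
print(monitor.on_utterance("controller", "pilot",
                           "Runway 27, cleared for takeoff"))   # None: conforms
print(monitor.on_utterance("pilot", "controller",
                           "Roger, taking off"))                # flags missing read-back
```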


What else can I help you with?

I'm happy to answer questions! Please contact me!


Heisterkamp Consulting

Max-Johann-Str. 11
89155 Erbach
Germany

E-mail: (address protected against spambots; enable JavaScript on the website to display it)
Phone: +49 (0) 176 1052 4719