For ages, scientists and fiction writers have imagined personal assistants. The original vision was anthropomorphic: a mechanical machine that walked and talked like a glorified butler. The digital era moved us away from that vision towards something more ephemeral. Now a fancy animation on a screen, accompanied by a synthesized voice, waits for you to command it to do what you want.
The rapid advancement of machine learning has brought us closer to that disembodied butler, and it is anything but boring. Recently, Apple has been planning to improve its digital assistant: the company has been working on a way for users to control Siri through iMessage instead of voice commands. This would allow users to use the digital assistant without having to talk to it.
According to Apple, with this method the user's input can be received and displayed as a first message in a graphical user interface (GUI). A state of the electronic device corresponding to the displayed message can then be stored. The device derives the user's intent from the input, carries out an appropriate action, and displays a second message in the GUI as a response based on that action.
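The flow described above can be sketched roughly in code. This is only an illustrative toy model of the patent's described steps, not Apple's implementation; every class, method, and string here is hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class TextAssistant:
    """Toy model of the patent's flow (all names hypothetical)."""
    conversation: list = field(default_factory=list)  # messages shown in the GUI
    saved_states: list = field(default_factory=list)  # stored device states

    def handle_input(self, user_text: str) -> str:
        # 1. Receive the user's input and display it as the first message.
        self.conversation.append(("user", user_text))
        # 2. Store a device state corresponding to the displayed message.
        self.saved_states.append({"last_message": user_text})
        # 3. Derive the user's intent and carry out an action (stubbed here).
        intent = self._derive_intent(user_text)
        result = self._perform_action(intent)
        # 4. Display a second message in the GUI as the response.
        self.conversation.append(("assistant", result))
        return result

    def _derive_intent(self, text: str) -> str:
        # Trivial keyword matching stands in for real intent parsing.
        return "set_timer" if "timer" in text.lower() else "unknown"

    def _perform_action(self, intent: str) -> str:
        return "Timer set." if intent == "set_timer" else "Sorry, I didn't catch that."
```

The key design point in the patent's description is that the whole exchange, input and response alike, lives in the message transcript rather than in a spoken back-and-forth.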
In simple words, Siri will be able to carry out an action in response to the user's input in a text conversation and will then reply within that conversation. This addresses Siri's inability to hear commands in noisy environments, as well as situations where it's hard to speak, such as in a library. The method is also useful for users with speech disorders or hearing impairments.
You can see the patent here.
Via: The Next Web