Futures Salon: “Conversations with Alexa”

We meet for our second digital lunch-hour format here on June 9th to discuss the future and implications of a voice-command-driven world. Dr. David Staley welcomes the group and begins moderating our discussion.

We open with the question: “Will Alexa or similar vocal processing and response technology have the same impact as the smart phone?”

Perhaps, but they play a different role in our lives. While their functionality is limited, they do not yet act as stakeholders in our social lives the way smart phones do. Smart phones became ubiquitous quickly; adapting to this "economy of automation" will take longer. This technology may prove to be more of an incremental innovation than a transformative one. It acts as another tool to access the library, whereas the smart phone has become the library itself.

Will this voice assistant have enough time to establish itself before being overshadowed by technology that bypasses the verbal requirement entirely? Will tools that work through touch, or even through thought, innovate more quickly?

Thinking about how this tool is implemented: if you ask "who won the game last night?", what is the response? With a search engine, a list appears that we can then sort through ourselves. Voice recognition software must instead determine which particular piece of information you were looking for. What happens, then, to this "conversational web" strategy? If a certain sports website is where your game news comes from, will that site begin organizing and highlighting its knowledge so that it becomes what is reported back from these vocal commands?
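The contrast above can be sketched in a few lines of code. This is a hypothetical toy model, not how any real assistant works: the sources, scores, and answers are invented for illustration. The point is structural — a search engine hands the whole ranked list to the user, while a voice assistant must collapse it to a single spoken answer, so whichever source ranks first effectively becomes "the" answer.

```python
def ranked_results(query, sources):
    """A search engine returns every matching source, ranked,
    and leaves the sorting and choosing to the user."""
    matches = [s for s in sources if query in s["tags"]]
    return sorted(matches, key=lambda s: s["score"], reverse=True)

def spoken_answer(query, sources):
    """A voice assistant must collapse that list to one answer --
    the top-ranked source speaks for everyone."""
    results = ranked_results(query, sources)
    return results[0]["answer"] if results else "Sorry, I don't know."

# Hypothetical sources competing to answer "who won the game last night?"
sources = [
    {"tags": {"game"}, "score": 0.9, "answer": "The Blue Jackets won 3-2."},
    {"tags": {"game"}, "score": 0.7, "answer": "Columbus beat Pittsburgh in overtime."},
]
```

Here `ranked_results("game", sources)` surfaces both sources, but `spoken_answer("game", sources)` reports only the first — which is exactly why a sports site would fight to be the top-ranked source for these vocal commands.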

Constructing knowledge ontologies to rival web browsers may be too daunting a task, and less suited to the market these voice recognition tools are developing toward. Instead of typing out a large transcript, these tools leverage enhanced vocal detection to become another, if not the primary, way in which we interact with computers. Will speaking become the dominant way to write novels? Texts? Research documents?

Another interesting thought: what do we actually consider dictation? Are tools that send voice recordings more efficient than having to type, or than translating speech to text? Our culture is currently much more visual than that of the writers of antiquity, who would use scribes to bring their words to the page.

As AI becomes more sophisticated, will our quick "do this" or "answer me this" statements evolve into actual conversations? Will we be able to explore deeper questions with these tools? Will AI be able to future with us? Bounce ideas around? Puzzle through questions? Become a member of the team?

Virtual assistants have recently shown the ability to set hair appointments, anticipate issues, and create workarounds. Right now we see these tools slicing and dicing information and handling certain exceptions, but not thinking independently or generating new knowledge.

To think about education: we teach students to develop libraries of knowledge and report that information back, all within a rigorous testing system. Is this not what Alexa is doing? We often assume intellectual power lies in the ability to answer questions, but the emphasis should truly be on the ability to ask questions and think critically. How do you evaluate the source of the answers? What if these voice recognition tools privilege a certain source of information? What are the implications of that preference, and will we still be able to question the source of these answers?

Voice is an important identifier as well. Companies are responding to the current uptake of facial recognition. Speech-to-text software is becoming better and better at understanding language, and with that come improvements in the ability to detect and identify individuals. Will voice become a new method of biometric security and identification? Speaking in public may then become data that plays a role in a surveillance state, where your voice is uniquely linked to your person and your data.
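The identification idea can be sketched very roughly. Real speaker-verification systems derive fixed-length "voiceprint" embeddings from audio with trained models; the vectors, threshold, and similarity test below are invented stand-ins that only illustrate the final comparison step — a new voice sample is accepted as a known speaker if it is similar enough to an enrolled voiceprint.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_same_speaker(enrolled, sample, threshold=0.85):
    """Accept the sample as the enrolled speaker if similarity
    clears a (hypothetical) decision threshold."""
    return cosine_similarity(enrolled, sample) >= threshold

# Invented 3-dimensional voiceprints for illustration only;
# real embeddings have hundreds of dimensions.
enrolled = [0.9, 0.1, 0.4]   # the known speaker's stored voiceprint
close = [0.88, 0.12, 0.41]   # a new sample from (plausibly) the same voice
far = [0.1, 0.9, 0.2]        # a clearly different voice
```

Under this sketch, `is_same_speaker(enrolled, close)` accepts while `is_same_speaker(enrolled, far)` rejects — and the same comparison that unlocks your account could, in a surveillance setting, link any overheard speech back to you.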

What is the actual value that this tool brings? Is the voice recognition tool's responsibility to solve problems only when your hands are occupied? "Verbalization is cacophony." Are we preparing for an "eTelepathy" device, with voice as the cruder intermediate interface? Will we be able to cross the vocal uncanny valley?

A few pieces of reading were shared if you are interested in reading or watching more about these topics:

Google Duplex: A.I. Assistant Calls Local Businesses To Make Appointments

Tracking people by their ‘gait signature’

How AI could become an extension of your mind

We appreciate everyone who participated and look forward to futuring with you next month!
