Eric Horvitz Receives AAAI Feigenbaum Prize; Shares Reflections On AI Research

Tuesday, January 27, 2015

Editor's note: Eric Horvitz, managing director of Microsoft Research's Redmond Lab, shares some reflections upon receiving the AAAI Feigenbaum Prize.

Horvitz is being recognized by the AAAI for "sustained and high-impact contributions to the field of artificial intelligence through the development of computational models of perception, reflection and action, and their application in time-critical decision making, and intelligent information, traffic, and healthcare systems."

How do our minds work? How can our thinking, perceiving, and all of our experiences arise in networks of neurons? I have wondered about answers to these questions for as long as I can remember.  Until just a few decades ago, discussions on mind and brain generally occurred within philosophy and theology.  Over the last century, research in psychology, biology, and computer science has brought into focus intriguing results and directions for approaching a science of intelligence.

We don’t yet have a clear understanding of the machinery underlying the human mind. However, we have been developing an understanding of the computational principles that capture different dimensions of intelligence. I see deeper insights about cognition coming via computer science, particularly through advances in artificial intelligence (AI), and the related fields of cognitive science and computational neuroscience.

It’s an exciting time for AI research. Innovations include advances in core computational “fabrics” such as representations of knowledge, inferential methods for drawing conclusions from that knowledge, and machine learning for acquiring new knowledge and abilities from data and perceptions. We’ve built key “competencies” on top of these fabrics, including machine vision, natural language understanding, speech recognition, and human-computer interaction. And we’ve been composing these competencies into rich, multilayered “symphonies” that yield new capabilities. One example of a symphony in progress is the Assistant, an automated administrative assistant outside my office at Microsoft Research. The system is built on basic fabrics of machine learning, inference, and decision making, and uses a set of components to learn from and reason about multiple streams of sensory input, leveraging competencies in vision, acoustical analysis, speech, and dialogue.
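To make the layering concrete, here is a minimal Python sketch of the pattern just described: per-stream competencies report likelihoods over a hidden state, an inference fabric fuses them, and a decision layer selects an action by expected utility. This is not the Assistant's actual architecture; every component name, state, and number below is an illustrative assumption.

```python
# Illustrative sketch only -- not the Assistant's real design. Hypothetical
# competencies (vision, acoustics, calendar) each report P(observation | state);
# an inference layer fuses them; a decision layer maximizes expected utility.
from typing import Dict, List

STATES = ["busy", "available"]

def fuse(prior: Dict[str, float],
         likelihoods: List[Dict[str, float]]) -> Dict[str, float]:
    """Naive Bayes fusion of independent streams:
    posterior proportional to prior times the product of likelihoods."""
    post = dict(prior)
    for lk in likelihoods:
        for s in STATES:
            post[s] *= lk[s]
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

def best_action(posterior: Dict[str, float],
                utility: Dict[str, Dict[str, float]]) -> str:
    """Pick the action with the highest expected utility under the posterior."""
    return max(utility, key=lambda a: sum(posterior[s] * utility[a][s]
                                          for s in STATES))

# Made-up likelihoods P(observation | state) from three hypothetical streams:
vision = {"busy": 0.8, "available": 0.3}
acoustics = {"busy": 0.6, "available": 0.2}
calendar = {"busy": 0.9, "available": 0.4}

posterior = fuse({"busy": 0.5, "available": 0.5}, [vision, acoustics, calendar])
utility = {"interrupt": {"busy": -10.0, "available": 5.0},
           "wait": {"busy": 1.0, "available": -1.0}}
print(posterior)                        # {'busy': ~0.95, 'available': ~0.05}
print(best_action(posterior, utility))  # 'wait' when the posterior says busy
```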

AI research is an enthralling, collaborative endeavor, where insights often arise in a volley of theory and experimentation.  I’ve had incredible colleagues over the years and have enjoyed being on fabulous teams of creative folks. Ideas on new directions often start with fundamental questions about intelligence and evolve to span a mix of theory, experimentation, and applications.  I’ve found that many researchers in the field stay true to their basic curiosities and long-term visions over decades while engaging in a spectrum of projects.  I trace many of my research interests to curiosity about mysteries of the operation of our minds. Initial sparks of curiosity grow into larger flames of mathematical models, working prototypes, and experimental studies.  The efforts sometimes blossom into systems and services that we press into real-world use, such as decision-support systems in healthcare, city-wide inference for traffic-sensitive directions, and caching and prefetching technologies that operate deep within our computing infrastructure.  It’s exciting when early, ill-formed questions and curiosities about the workings of the mind, or about the possibility of creating a new computational capability, mature into research programs, leading to new insights—and sometimes to the construction and fielding of real-world systems that provide value to people.

As examples, I’ve long been interested in methods that could provide automated reasoning systems with the skills of a Sherlock Holmes.  How could we enable the systems to piece together multiple observations and to draw conclusions and take actions under uncertainty? How can we imbue our systems with curiosity and the ability to ask valuable questions—so as to collect the best new information to enhance decisions? Such capabilities, developed through research on probabilistic and decision-theoretic representations and inference, have been harnessed in systems that diagnose outcomes in healthcare, infer intentions behind our search queries, and predict traffic flows so as to find the best routes.
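One way to formalize that "curiosity" is the expected value of information (EVOI): compute how much the best decision improves, in expectation, after observing an answer, and ask the question only when that gain exceeds its cost. Below is a toy Python sketch for a two-hypothesis diagnosis with one candidate test; all probabilities and utilities are made-up assumptions, not values from any deployed system.

```python
# Toy expected-value-of-information (EVOI) calculation for one diagnostic test.
# All numbers are illustrative assumptions.

def expected_utility(p_disease: float, utilities: dict) -> float:
    """Utility of the best action given the current belief P(disease)."""
    return max(p_disease * u_d + (1.0 - p_disease) * u_h
               for u_d, u_h in utilities.values())

def evoi(prior: float, sensitivity: float, specificity: float,
         utilities: dict) -> float:
    """Expected gain from seeing the test result before acting."""
    p_pos = sensitivity * prior + (1.0 - specificity) * (1.0 - prior)
    post_pos = sensitivity * prior / p_pos                  # P(disease | positive)
    post_neg = (1.0 - sensitivity) * prior / (1.0 - p_pos)  # P(disease | negative)
    eu_after = (p_pos * expected_utility(post_pos, utilities)
                + (1.0 - p_pos) * expected_utility(post_neg, utilities))
    return eu_after - expected_utility(prior, utilities)

# (utility if diseased, utility if healthy) for each action:
utilities = {"treat": (90.0, -10.0), "wait": (-50.0, 100.0)}
print(evoi(prior=0.3, sensitivity=0.9, specificity=0.85,
           utilities=utilities))  # ~26.25: this question is worth asking
```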

Sometimes we step back, put ourselves in the shoes of a computational system, and ask how we might make decisions when faced with limited knowledge and time. I've pursued principles of computational intelligence wearing such shoes. And the shoes aren't all that different from the shoes that I wear in daily life. How can a reasoning system with limited reasoning abilities, memory, information, and time make the best decisions under uncertainty? What does it mean for such a system to be rational? What are principles of bounded rationality for a computing system? How might a system reflect on its own reasoning so as to guide its own thinking? What kind of reflections would allow a system to optimize its reasoning and problem solving, for example, to trade off the accuracy of its thinking with the timeliness of its actions in different settings? What algorithmic procedures would enable such a reasoning system to be successful within real-world environments where streams of problems arrive over time? How could systems situated in such environments continue to learn over extended periods—to perform lifelong learning? Might we prove that certain thinking policies are ideal?
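One family of answers to these questions comes from work on anytime algorithms and metareasoning: keep an answer ready at all times, refine it incrementally, and stop deliberating when the estimated value of further computation falls below the cost of the delay. Here is a hedged Python sketch of that stopping pattern; the refinement dynamics and cost model are invented for illustration.

```python
# Sketch of an anytime solver with a simple monitoring policy: halt when the
# recent rate of improvement drops below the cost of the time a step takes.
import random

def refine(quality: float) -> float:
    """One step of a hypothetical anytime algorithm; quality creeps toward 1.0."""
    return quality + 0.3 * random.uniform(0.0, 1.0 - quality)

def deliberate(time_cost_per_step: float, window: int = 3) -> float:
    quality, gains = 0.0, []
    while True:
        new_quality = refine(quality)
        gains.append(new_quality - quality)
        quality = new_quality
        # Estimated value of one more step, from recent improvements:
        recent = sum(gains[-window:]) / min(len(gains), window)
        if recent < time_cost_per_step:  # more thinking no longer pays
            return quality

print(deliberate(time_cost_per_step=0.05))
```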

Beyond focusing solely on computation, I’ve been very interested in the minds of people, and the similarities and complementarities of the cognition of people and machines. I have viewed challenges in human-computer interaction as bringing to the fore interesting and important opportunities for AI. How might we provide computing systems with deeper knowledge of human cognition, for example of our attention and memory—and leverage the knowledge in human-computer interaction? I’ve been curious about methods that would enable machines to better collaborate with people, so they can serve as valuable companions in problem solving. Our team has pursued opportunities for harnessing the complementary abilities of people and machines.
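A simple decision-theoretic version of that complementarity, sketched below under invented costs: let the machine act when it is confident, and route a case to a person when the expected cost of a machine mistake exceeds the cost of human attention.

```python
# Hypothetical human-machine triage rule; the costs and confidences are
# illustrative assumptions, not parameters of any fielded system.

def route(p_machine_correct: float, cost_of_error: float,
          cost_of_human_review: float) -> str:
    """Hand a case to a person when a machine mistake is costlier in
    expectation than asking for human review."""
    expected_machine_cost = (1.0 - p_machine_correct) * cost_of_error
    return "human" if expected_machine_cost > cost_of_human_review else "machine"

for confidence in (0.97, 0.62, 0.91):
    print(confidence, "->", route(confidence, cost_of_error=100.0,
                                  cost_of_human_review=5.0))
# 0.97 -> machine; 0.62 -> human; 0.91 -> human
```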

It has been exciting to see advances in AI become part of everyday life. On a daily basis, hundreds of millions of people now benefit from innovations in speech recognition, face detection and recognition, automated recommendations, navigation, and diagnosis. Machine intelligence has been playing an increasingly central role in multiple scientific endeavors, including the analysis of large amounts of data in the biosciences. I believe that AI developments will have an even more profound impact on society over time, with deeper contributions to come in areas such as transportation, health and well-being, education, scientific discovery, and personal empowerment.

The real-world deployment of AI research is exciting, but with that success come responsibilities for understanding the benefits, risks, and effects of the technologies. Advances in machine intelligence will likely have significant influences on people and society, and the effects will likely touch on issues in the legal, ethical, economic, and psychological realms. There has been some discussion in the press recently about the dangers of AI, with some well-known people expressing anxieties about machine intelligence. To complement that discussion, I recently co-authored an essay with Tom Dietterich, the current president of the Association for the Advancement of Artificial Intelligence (AAAI).

I am optimistic about advances in AI applications and about the value of developing deeper insights about intelligence. However, I believe that we'll need to remain vigilant about assessing and continuing to address potential risks and rough edges. One area that comes to mind is privacy. Machine learning and reasoning can be used to make deep inferences about people by weaving together sets of seemingly innocuous streams of shared data—including data that people share publicly, such as tweets. There's been great work in enhancing privacy in multiple areas of computer science. I actually see advances in AI as providing some of the answers. AI can provide new approaches to anonymization and privacy and also methods that can help people to balance the benefits and costs of sharing data with services. Another set of concerns is with applications of machine intelligence for control and decision making in high-stakes domains, such as navigating cars and controlling surgical robots. We need to be assured that systems working in high-stakes areas will behave safely and in accordance with our goals, even when they encounter unforeseen situations. Given their roles and responsibilities, these systems must be resistant to being hacked with new forms of malware. I'm optimistic that principles of verification and strong software engineering, along with advances in AI around safety and robustness, can take on these new challenges. However, there is much work to be done. As we innovate, we must continue to reflect—and be proactive with focused research and recommendations as necessary.
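As one concrete example of the privacy machinery alluded to above (a standard construction from the privacy literature, not a method from this essay): the Laplace mechanism of differential privacy releases a noisy statistic so that any single person's data changes the output distribution only slightly. A minimal Python sketch, with an illustrative epsilon and count:

```python
# Laplace mechanism for an epsilon-differentially-private count.
# Textbook construction; epsilon and the count below are illustrative.
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """A count query has sensitivity 1, so noise of scale 1/epsilon suffices."""
    return true_count + laplace_noise(scale=1.0 / epsilon)

print(private_count(true_count=4213, epsilon=0.1))  # noisy value near 4213
```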

As I look to the future, I feel the same excitement that I’ve harbored since my graduate school days about what we’ll be able to accomplish over the next few decades.  I see great value ahead for humanity coming from advances in machine intelligence.  I especially look forward to new insights and answers to my long-standing questions about the computational foundations of intelligence. I’m impatient to learn and to understand more.  Our work continues. Charge!

Also: Watch the full interview with Eric Horvitz on the future of AI

Microsoft Research
