An event about AI, and not only about AI techniques but also about ethics, humanitarian aid and many other topics. A good starting point for us to look back at the state of AI in 2017.
The world seems to be divided into two camps (and maybe even a third, more about that later). There is the camp that is pro AI, who embrace the technology and the possibilities it offers not only to us in the Western world, but to the whole world. Then there is the camp that is not so happy about this; in fact, it is even scared of world domination by the robots we created ourselves. We believe the truth lies somewhere in the middle.
From buzzword to ...
AI seems to have become just another buzzword, perhaps even bigger than 'big data'. It has been around for a long time, but this year AI and its 'applications' seem to have reached a broader audience. The consequences of what we are able to achieve might even be bigger than those of the internet itself. Currently, we are only scratching the surface of what AI can do and mean for us. But what exactly is the current state in practice?
It seems most initiatives are focused around Machine Learning (ML) and Predictive Models (PM). PM in particular is something we see a lot in marketing these days: predicting what a consumer will buy next, or where their interests lie. Amazon, for example, has been working on its patented "anticipatory shipping" for a while. This would allow Amazon to send items to shipping hubs in areas where it believes they will sell, and in the future even to deliver products Amazon predicts you want before you have ordered them.
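To make the idea of such a predictive model concrete, here is a minimal sketch in Python that predicts a shopper's next purchase from co-occurrence counts. All item names and purchase histories are made up for illustration; real systems like Amazon's use far richer models than this simple frequency count.

```python
from collections import Counter, defaultdict

# Hypothetical purchase histories, ordered by time of purchase.
histories = [
    ["phone", "case", "charger"],
    ["phone", "charger"],
    ["laptop", "mouse"],
    ["phone", "case"],
]

# Count how often item B is bought right after item A.
follows = defaultdict(Counter)
for history in histories:
    for current, nxt in zip(history, history[1:]):
        follows[current][nxt] += 1

def predict_next(item):
    """Return the most frequent follow-up purchase for an item, or None."""
    counts = follows.get(item)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("phone"))  # "case": it follows "phone" in 2 of 3 histories
```

A real predictive model would weigh many more signals (browsing behaviour, seasonality, demographics), but the principle is the same: learn patterns from past behaviour and extrapolate.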
eBay, for that matter, created a shopping assistant for Facebook Messenger that derives intent from photos. Users can send a picture of an item they wish to purchase and the assistant will look for the best-fitting item in stock using ML and image recognition. "AI is the thriving power of future eBay use," says RJ Pittman, SVP and Chief Product Officer at eBay.
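At its core, this kind of photo-based matching often works by embedding images as numeric vectors and finding the catalogue item whose vector is closest to the query. The sketch below shows that nearest-neighbour step with hand-written toy vectors; in a real system like eBay's, the embeddings would come from an image-recognition model, and the item names here are purely hypothetical.

```python
import math

# Hypothetical embedding vectors for catalogue items. In practice these
# would be produced by an image-recognition model, not written by hand.
catalogue = {
    "red sneaker":  [0.9, 0.1, 0.0],
    "blue sneaker": [0.8, 0.0, 0.2],
    "red handbag":  [0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def best_match(query_embedding):
    """Return the catalogue item whose embedding is closest to the query."""
    return max(catalogue, key=lambda item: cosine(catalogue[item], query_embedding))

# A photo of a reddish shoe would embed near the "red sneaker" vector.
print(best_match([0.9, 0.2, 0.0]))  # red sneaker
```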
AI and humans; a match made in space?
Steve Chien from NASA was one of the key speakers at the World AI Summit 2017. Steve and his team are working on implementing AI in spacecraft for research purposes. Right now they are embedding AI into spacecraft that research Mars. The planet is covered with a red layer of dust, and researchers are therefore mainly interested in what lies beneath. They are developing spacecraft programmed to pinpoint locations on an object and use a laser to cut through the dust and dirt to the surface underneath. To avoid wasting precious time and to work as efficiently as possible, NASA relies on AI to take care of what humans would normally have to program into the craft before launch. A second project involves an automatic planning system for rovers to be launched in 2020, to optimise research time: spreading a swarm of little rovers that are connected and work together autonomously in exploring Mars. Pretty exciting stuff considering our interest in Mars and AI!
These are just a few of the new projects NASA is currently working on.
“The problem of AI lies not with the machines we create, but with the humans that actually control them,” says Joanna J. Bryson, Associate Professor at the University of Bath. She gave two sessions at the WSAI: a panel discussion on the main stage and a parallel session on bias in AI. This pinpoints the exact discussion on AI. People tend to be afraid of things they do not know or understand. But AI did not just grow out of the ground; it is something we, human beings, created. Therefore we should be accountable for what we have created. Professor Bryson has also written a lot of interesting material on ethics in AI. Consider it a must-read if you are in any way working on AI projects.
Although many people fear a dystopian future of a world run by AI robots, mostly fed by Hollywood movies such as 'Terminator', 'I, Robot' and 'Ex Machina', this is not something that will happen at some point in the future; it is already our reality. NASA has spacecraft orbiting our planet that have been operating on AI for over a decade. And still none of them has been working on the destruction of humans. Why not? Because even though the craft are self-learning, this does not mean we no longer control them.
The ethics of it all
Once you accept that the technology itself is not the issue, but that humans determine how AI is used, you quickly end up in a discussion about the ethical use of AI. Meredith Whittaker, who is the definition of girl power by the way, addressed this at the WSAI17 and warned us in advance that her story would be a little pessimistic. Not because she is not positive about AI, but because of what we, humans, could and should do with it in terms of ethics.
The basis of any AI solution should be a solid dataset that takes important ethics into account, such as equality and decent values; not the free datasets that are available all over the place. Why not use them? In terms of unethical outcomes, think of AI assistants or AI-driven processes making unethical decisions based on bias in the dataset, or drawing correlations between variables that are not actually related (such as in the image below). As developers, AI enthusiasts and preachers of this amazing thing we have going on here, we should feel accountable for the things we build and release into the world.
Source: Spurious Correlations
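It is easy to demonstrate how two unrelated quantities can look strongly correlated. The sketch below computes a Pearson correlation between two made-up yearly series that both simply trend upward; the numbers are invented for illustration, yet the correlation comes out close to a perfect 1.0.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two hypothetical yearly series that both just happen to trend upward.
ice_cream_sales = [100, 120, 140, 160, 180]
drownings       = [10, 12, 13, 15, 17]

r = pearson(ice_cream_sales, drownings)
print(round(r, 3))  # close to 1.0, yet neither causes the other
```

A model trained on data like this would happily "learn" the relationship, which is exactly why dataset curation matters.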
Besides humans learning how to work with AI and being ethical about it, it is important to gain global adoption of this technology in order to make it work for us, instead of against us, or not at all.
During the United Nations session held by Irakli Beridze it became obvious we are nowhere near world domination. According to Irakli, only 2 of the 193 (!) nations in the General Assembly acknowledged the importance of AI and its possibilities once adopted globally. Irakli explained that the AI and Robotics department of UNICRI (which is located in The Hague) has a limited budget to create awareness and training. Surprisingly enough, the budget was first spent on creating a course for media professionals, instead of a broader and more effective approach in, say, education for students. Nevertheless, the UN's idea to use AI to reach the Sustainable Development Goals is a noble challenge and a step towards the adoption of AI by global leaders.
In this light, the urge to legislate and control technologies like AI (but also blockchain, for example) keeps popping up too. The World Economic Forum has made it its mission to legislate AI. Now, it is only human to try to understand and legislate new, mindblowing technologies, but should we? Won't it just slow down the amazing developments going on at this moment in time? And what about self-regulating technologies such as blockchain? How would you regulate those?
The state of AI @ WSAI 2017
We saw some awesome new developments in techniques at the WSAI 2017, some good examples of consumer solutions with AI, and some great new start-ups with good ideas on how to use AI to improve the world. But overall there was nothing really new. Global leadership fails to highlight the importance of AI for the world, and from a consumer and development perspective we did not learn anything new either. And that is exactly where we stand. People are (finally) acknowledging this revolution we are in, which might be bigger than the birth of the internet. It takes time to fully understand the possibilities of what mankind has created, and after the WSAI 2017, where over 2000 AI professionals participated, one thing is clear: there is still a long way to go. Not so much on the commercial side, since companies are adopting AI and developing their own AI assistants, e-commerce platforms and so on, but all the more on the global leadership side and in awareness of accountability for what you create.
Nevertheless, these developments make us eager to find out what 2018 has to offer! Bring it on, we are ready!