On the weird things that AIs do.
5 stars
An excellent and hilarious book about the state of actual AI technology in the world (as opposed to the AIs you may see in popular media) and why they can do weird things. As it turns out, the weirdness can come from the data used to train the AI, from how the AI processes that data, and from how we tell the AI to solve a problem for us. You will get a good understanding of how AIs actually work, what they can (and can't) do, and how AIs can actually help humans do their jobs (or entertain us with hilarious failures).
Chapter one looks at what kinds of AI are featured here. While the public may have some ideas about AI from the popular media, the kinds of AIs looked at here are actual ones in use, meaning machine-based systems that accept data, apply machine learning algorithms to it, and produce an output. The chapter gives a brief look at how such AIs are trained on data and what happens as they gradually learn what kind of output is 'acceptable'. While humans may initially specify what to produce based on the provided input, such AIs may learn and process the data in unexpected ways, leading to weird and unexpected output.
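To make the training idea concrete (this is my own toy sketch, not code from the book), here is a miniature version of 'learning what output is acceptable': a single weight is nudged after every example until its predictions match the provided data, with nothing about the rule itself being spelled out by a programmer.

    # Toy sketch (not from the book): learning a rule from examples by
    # nudging a single weight until the predictions look 'acceptable'.

    # made-up training data: inputs and the outputs we want (roughly y = 2x)
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

    w = 0.0              # the 'model' starts out knowing nothing
    learning_rate = 0.01

    for epoch in range(200):
        for x, y_true in data:
            y_pred = w * x                   # the model's current guess
            error = y_pred - y_true          # how far off the guess is
            w -= learning_rate * error * x   # nudge the weight to shrink the error

    print(f"learned w = {w:.2f}  (the 'acceptable' rule turns out to be y of about 2x)")

The point is only that the rule comes from the data rather than from explicit instructions, which is exactly why odd data leads to odd behaviour.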
Chapter two looks at what AI systems are now doing. From running a cockroach farm and providing personalized product recommendations to writing news reports and searching scientific datasets, AI has its uses. The flip side is AI being used to produce things like deepfakes (swapping people's heads or making people appear to do things they didn't). In general, AIs are currently better at very specific tasks (like handling an initial customer support request) and not very good at more general tasks (like creating cooking recipes or riding a bike). One reason is that such general tasks usually require some kind of long-term memory to remember what has been done (like when creating long-form essays), but current AI systems lack this memory capacity. Also, some general tasks create situations that AIs may never encounter in their training data (like driving safely upon seeing unexpected obstacles).
Chapter three looks at how AIs actually learn by looking at the various types of AI systems: Neural Networks, Markov Chains, Random Forests, Evolutionary Algorithms and Generative Adversarial Networks. Examples of such AIs are given, like the autocorrect system used in smartphones, along with their advantages and disadvantages in handling input data, processing it and producing the expected (or unexpected) output.
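Since the autocorrect example is essentially a Markov chain, here is a rough sketch of the idea (mine, not the book's): the model only remembers which word tends to follow which, which is why its suggestions are locally plausible but have no overall plan.

    # Toy sketch (not from the book): a Markov chain next-word suggester,
    # in the spirit of smartphone autocorrect.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

    # remember which words were seen following which
    follows = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word].append(next_word)

    # generate a 'sentence' by repeatedly picking an observed continuation
    word = "the"
    output = [word]
    for _ in range(8):
        options = follows.get(word)
        if not options:                # dead end: this word was never followed by anything
            break
        word = random.choice(options)
        output.append(word)

    print(" ".join(output))   # e.g. "the cat sat on the rug" -- plausible, but aimless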
Chapter four looks at why AIs don't appear to work despite trying to produce acceptable output. There may be several reasons for this: the problem the AI is being asked to solve may be too broad (like making cat pictures after being trained on pictures of people), or the amount of data provided to the AI may be too little for the task required. The input data may also be too 'messy', full of information not actually required for the task or containing mistakes that confuse the AI.
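The 'too little data' problem is easy to demonstrate (again my own toy sketch, not the book's): with only a handful of noisy examples, a flexible model memorizes the noise and gives wild answers on inputs it has never seen, while a simpler model generalizes.

    # Toy sketch (not from the book): too little, too-noisy data.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y_train = 2.0 * x_train + rng.normal(0, 0.5, size=5)   # true rule y = 2x, plus noise

    simple = np.polyfit(x_train, y_train, deg=1)     # a straight line: about right
    flexible = np.polyfit(x_train, y_train, deg=4)   # degree-4 curve: memorizes the noise

    x_new = 6.0   # an input neither model has seen
    print("true value     :", 2.0 * x_new)
    print("simple model   :", np.polyval(simple, x_new))
    print("flexible model :", np.polyval(flexible, x_new))   # usually far off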
Chapter five shows how AIs complete their tasks, only not the tasks the designers expected. There are several reasons for this. One is that the AI does the task, only not in the expected way: for example, moving a robot backwards because it was told not to activate its bumper sensors (which are located at the front). AIs also usually learn in a simulated environment (to speed up learning), which may lead them to exploit 'glitches' in the simulation to solve a problem (like 'gaining' energy to jump high by making multiple tiny steps first). Other times, AIs solve problems in unexpected ways because the expected behaviour is too hard to learn, like growing a long leg to fall from point A to point B instead of learning to walk (the expected behaviour), since walking is hard.
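The backwards-driving robot is a classic mis-specified reward, and it is easy to reproduce in miniature (my own sketch, not the book's): if the only thing being scored is 'don't trigger the front bumper', then driving away from the goal scores best.

    # Toy sketch (not from the book): a reward that only punishes bumper hits,
    # in the spirit of the backwards-driving robot.
    WALL = 3    # driving forward into this triggers the front bumper
    GOAL = 5    # where the designer actually wants the robot to end up

    def run(policy, steps=10):
        """Score a fixed policy ('forward' or 'backward') under the flawed reward."""
        position, reward = 0, 0
        for _ in range(steps):
            if policy == "forward":
                if position + 1 >= WALL:   # front bumper hits the wall
                    reward -= 1            # the only thing the reward measures
                else:
                    position += 1
                    reward += 1            # moved without touching the bumper
            else:                          # backward: nothing back there to bump into
                position -= 1
                reward += 1
        return reward, position

    for policy in ("forward", "backward"):
        reward, position = run(policy)
        print(f"{policy:8s} reward={reward:3d} final position={position:3d} (goal is {GOAL})")
    # 'backward' scores highest while ending nowhere near the goal.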
Chapter six covers more examples of AIs completing tasks in unexpected ways. The main reason for this is that AIs work in a simulated environment, and the solutions they come up with may only work there. Examples include making optical lenses that are very thick, exploiting mathematical rounding errors or even bugs in the simulation. Such solutions will, of course, not work in the real world.
Chapter seven looks at 'shortcuts' that AIs may take to get to a solution. This is usually due to unexpected features in the training data that the AI may fixate on. For example, an AI trained to recognize a certain type of fish from images was found to be focusing on fingers in the images instead, because in the training data, fingers were always present holding the fish to be recognized. Biases (sometimes hidden) in the input data can also cause the AI to provide biased solutions; for example, recommending hiring only men because in the input data, men were the ones usually hired. Since how the AI comes to a decision is not usually examined, such biased decisions may instead become the norm based on the premise that 'the machine made the recommendation and the machine cannot be biased', without recognizing that bias in the input data may be the cause of the problem.
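The fish-and-fingers shortcut is worth a tiny illustration (mine, not the book's): if a spurious feature appears in every positive training example, a naive learner will happily credit the spurious feature instead of the real one.

    # Toy sketch (not from the book): a 'shortcut' feature. Every training
    # photo of a fish also contains fingers holding it, so the learner ends
    # up crediting the fingers.

    training_data = [
        ({"fins", "scales", "fingers"}, "fish"),
        ({"fins", "tail", "fingers"},   "fish"),
        ({"scales", "tail", "fingers"}, "fish"),
        ({"grass", "rocks"},            "not fish"),
        ({"water", "rocks"},            "not fish"),
    ]

    # score each feature by how strongly it co-occurs with the label 'fish'
    fish_score = {}
    for features, label in training_data:
        for feature in features:
            fish_score[feature] = fish_score.get(feature, 0) + (1 if label == "fish" else -1)

    def looks_like_fish(features):
        return sum(fish_score.get(f, 0) for f in features) > 0

    print(looks_like_fish({"fins", "scales"}))     # True: genuine fish evidence
    print(looks_like_fish({"fingers", "thumb"}))   # True as well: the shortcut wins
    print(looks_like_fish({"grass", "rocks"}))     # False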
Chapter eight considers whether AI works like the human brain and, in general, it does not. AIs have problems remembering things in the long term. For example, passages written by AI tend to meander from topic to topic, generating output that, taken as a whole, is inconsistent. AIs are also prone to 'adversarial attacks' because they tend to put too much weight on certain inputs. Examples include modifying an image to mislead an AI recognition program into thinking a submarine is a bonnet, or gradually modifying an image of a dog into one of skiers, yet leaving the AI thinking it is still looking at a picture of a dog.
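Adversarial attacks sound exotic, but the mechanism is simple enough to sketch (my own toy example, not the book's): a linear classifier leans heavily on a few inputs, so a tiny nudge aimed at exactly those inputs flips its answer while the image barely changes.

    # Toy sketch (not from the book): an adversarial nudge against a linear classifier.
    import numpy as np

    # a stand-in for a trained classifier: it leans heavily on the first two 'pixels'
    weights = np.array([4.0, -3.0, 0.1, 0.1, 0.1, 0.1])

    # a stand-in 'image' the classifier correctly labels as a dog (score > 0)
    image = np.array([0.6, 0.2, 0.5, 0.5, 0.5, 0.5])

    def label(x):
        return "dog" if float(weights @ x) > 0 else "not a dog"

    # move every pixel a little in whichever direction lowers the score most
    epsilon = 0.3
    attacked = image - epsilon * np.sign(weights)

    print(label(image))      # dog
    print(label(attacked))   # not a dog, even though no pixel changed by more than 0.3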
Chapter nine looks at the problem of distinguishing between an AI and a person doing a job. This is partially due to hyped-up articles that proclaim that AI will be doing certain jobs instead of humans (for example, driving). The author provides several ways to probe whether certain output has been produced by an AI or, possibly, by a human pretending to be an AI.
Chapter ten looks at the future and shows that the current way forward is a world where both AIs and humans have decision-making jobs to do. AIs can be trained on data, but it is up to humans to determine whether the results are valid and to modify or update the input data to let the AI do a better job. For now, the future is one where both AIs and humans coexist.