Week 1: Artificial General Intelligence
Four Background Claims (Nate Soares)
https://intelligence.org/2015/07/24/four-background-claims/
This article goes through four different claims, presenting common views on each claim and responses to those views. It then discusses why each claim is important to engage with.
The first claim concerns the human ability to solve general problems that span a variety of domains. Refuting this claim would mean that general intelligence doesn't exist and that we could achieve artificial intelligence by designing discrete machines that each solve an individual task; a collective of these machines would then form what we perceive as a generally intelligent agent. The article responds by stating that general intelligence arises from the interaction of different cognitive faculties: the whole is made of components, but it is greater than the sum of its parts. Engineering a system that possesses these interactions, and thereby forms a general intelligence, would provide many insights into human intelligence and into why we appear to be dominant over other species.
The next claim looks at the potential for an AI system to be much more intelligent than humans. One could argue that brains are a special piece of machinery that computers will never be able to recreate. The author responds by emphasizing that brains are still physical objects and must conform to the laws of physics, so in principle we can replicate them. One could also argue that general intelligence is too complex to be programmed, and so we will never surpass human-level general intelligence. However, we see signs of intelligence in other creatures, which indicates that intelligence arose through natural selection. As natural selection is a form of genetic programming, there is no reason to expect that we cannot program such intelligence (however implicit that programming may be). Knowing the capabilities of AI systems is important, as they are likely to take on significant roles in our society at an increasingly rapid pace.
Thirdly, the article discusses how a highly intelligent AI system would shape the future. As such systems would be significantly smarter than us, they could gain control over humans. One could argue that our environment is so competitive that an AI system would have to work with humans in order to be successful. Historically, however, technologically superior groups have dominated their rivals, and there is no reason why an AI would not socially manipulate humans for its own gain. This is an incredibly important topic because how the future unfolds matters enormously.
Finally, the article proposes the claim that AI systems will not be beneficial by default. Refuting the claim would mean we could expect an AI to learn to conform to our values and remain peaceful toward us. However, a highly intelligent system may be capable of modifying its own code and altering its own intentions. It could therefore quickly diverge from the values we instilled in it, pursuing its own goals to the detriment of humans. To maximize the benefit we gain from these systems, we need to learn how to align their values with our own.
AGI Safety From First Principles (Richard Ngo)
https://drive.google.com/file/d/1uK7NhdSKprQKZnRjU58X7NLA1auXlWHt/view
In this article, the idea of intelligence and the impact an AGI agent could have on the human race are explored. The so-called "Second Species" argument claims that an AGI system will not obey us and that humanity will become the second most powerful species. Before going further, the article defines what it means to be intelligent and the different forms intelligence may take. Intelligence is taken to be the ability to achieve goals in a variety of environments. There are task-based approaches, where agents are optimized to achieve individual tasks, and generalization-based approaches, where the agent has the ability to understand a task with minimal task-specific training.
Generalization-based approaches appear preferable for developing an AGI agent, as they more closely resemble human intelligence. We are trained during childhood and then fine-tune those skills in adulthood. The ability to abstract allows us to identify common structure between tasks and efficiently learn new ones; in turn, we can build upon the general skills we gained as children to tackle a host of different problems in adulthood.
However, task-based approaches can give us powerful systems in a short development time when we have lots of data (provided the task is easy to train for; some are not). The advantage of generalization-based approaches is that the agent will be able to complete any task, regardless of how difficult it is to train for directly.
An agent that can achieve human-level performance on a wide range of tasks is deemed to be generally intelligent.
We could arrive at such an agent in different ways, and there is no reason to believe it would not progress to become superintelligent. A superintelligent agent is one that exceeds the cognitive capacities of the human race as a collective (according to this article). Humans are limited in speed and size, whereas a computer system is not. The transition from general intelligence to superintelligence may be facilitated by:
- Better computing power
- Better algorithms
- More training data
- Replication - Duplicating an AGI agent so that as a collective it becomes a superintelligent system.
- Cultural Learning - AGI systems learn from each other to solve complex problems that no individual system could solve on its own (a dynamic clearly present in human civilization).
- Recursive Improvement - An AGI will be able to improve its own training process and implementation. Iterative improvements will form positive feedback loops, with humans not necessarily in the loop (a toy sketch of such a loop follows below).
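To make the idea of a positive feedback loop concrete, here is a minimal toy model (my own illustration, not from the article): each round of self-improvement increases capability in proportion to current capability, so gains compound. The function name and the 10%-per-generation rate are hypothetical placeholders.

```python
# Toy model (not from the article): recursive improvement as a positive feedback loop.
# "capability" is an abstract score; the improvement each generation is assumed to be
# proportional to current capability. All numbers are hypothetical placeholders.

def recursive_improvement(initial_capability=1.0, improvement_rate=0.1, generations=10):
    """Return the capability after each round of self-improvement."""
    capability = initial_capability
    history = [capability]
    for _ in range(generations):
        # The system improves itself in proportion to how capable it already is.
        capability += improvement_rate * capability
        history.append(capability)
    return history

print(recursive_improvement())  # capability compounds: 1.0, 1.1, 1.21, ...
```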
Future ML Systems Will Be Qualitatively Different (Jacob Steinhardt)
https://bounded-regret.ghost.io/future-ml-systems-will-be-qualitatively-different/
Future AI systems will give rise to a range of phenomena, each arising in different ways and with varying consequences. For example, we have observed qualitative changes resulting from quantitative changes in scale; when such a change happens rapidly it is known as a phase transition. There have already been such shifts historically: take the rise of deep neural networks, which came about because increases in computing power allowed backpropagation to run efficiently. A phase transition may arise from:
- Storage and Learning - As memory capacities increase, more storage becomes available, making approaches to AI that learn from large amounts of stored data feasible.
- Compute, Data, and Neural Networks - With more computing power and data one can train larger networks. For example, as we gained access to more data we could transition away from hand-coded models to models that learn features.
- Few-Shot Learning - A capability that emerged as models were scaled up; it was an unforeseen consequence of our training methods.
- Grokking - A sudden improvement in a network's generalization when it is trained for much longer than is needed to fit the training data.
The article concludes that:
- A more philosophical view of technological progress should be taken, rather than a purely engineering-oriented one.
- Future ML systems will have peculiar failure modes that we will have to anticipate and address.
The Bitter Lesson (Rich Sutton)
http://www.incompleteideas.net/IncIdeas/BitterLesson.html
Researchers typically want to develop techniques that are rooted in firm theoretical ground, leveraging human knowledge to extract the best performance out of our systems. In practice, however, the largest gains are achieved by simply scaling up our current models and exploiting phenomena such as Moore's Law. The article makes the case that built-in human knowledge complicates matters and actually restricts us from leveraging advances in computation to improve our models. Ultimately, learning is a type of search, and we should embrace the massive computation at our disposal to power this search rather than rely on human-knowledge-based methods. Take deep learning: these models have had tremendous success due in large part to their scale and their opaqueness, as they do not encode human knowledge directly.
Building human knowledge into agents can help us make short-term gains; in the long term, however, the largest gains will come from increases in computational power. We should therefore build meta-methods into our agents, so that as computing power increases they can capture the complexity of the task at hand.
The "Most Important Century" Blog Post Series (Karnofsky)
https://www.cold-takes.com/most-important-century/
We often get so caught up in the short-term fluctuations of life that we forget to step back and realize that, as a civilization, we are growing at a rapid rate. It is important to take this step back so that we can prepare for the radical changes that are inevitable.
We need to keep safety in mind because, as history shows, technological advances can create injustice and inflict suffering on large portions of the population. For example, creating digital profiles of people could have dystopian ramifications if not controlled properly; if controlled, however, it has the potential to be greatly beneficial.
The power of technological advances is that they create self-perpetuating loops of improvement. Such feedback loops have the potential to produce an economic singularity, in which growth accelerates so rapidly that the economy is transformed within a very short span of time, resulting in futures radically different from the recent past. If, in particular, the technological advances are not aligned with human values, that future may not be very pleasant.
The article claims that PASTA (Process for Automating Scientific and Technological Advancement) will be developed this century. As it becomes more affordable to train large models, we should expect to see such a system in the not-so-distant future, although we do not know precisely when. We must therefore prepare now to mitigate any consequences such a system may have. The preparation is not purely about restricting such a system so that it cannot operate in society; it is more about controlling its deployment so that its positive impacts are harnessed and its negative impacts are suppressed. We need to take robust action to capitalize on technological advances.
Forecasting Transformative AI: The "Biological Anchors" Method In A Nutshell (Holden Karnofsky)
https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/
There have been many attempts to quantify when we will see the emergence of transformative AI, that is, an AI powerful enough to bring us into a qualitatively new future. These attempts are grounded in a set of hypotheses that are used to map out a timeline of progression toward transformative AI. This article constructs a timeline using the Biological Anchors (BioAnchors) approach, under which we should expect to see major milestones on the road to transformative AI achieved this century.
The BioAnchors method assumes that such an AI will learn through trial and error, and that a larger model will take more time to train but will be able to solve more complex tasks. As a result, we can construct a timeline by considering when it will become feasible to train an AI model large enough to solve the hardest tasks that humans can do. The threshold set by the hypotheses is a model ten times the size of the human brain.
The article first looks at training. To teach a program a task, we can either give it explicit instructions or train it through trial and error. The first is a reliable and cheap way to get a program to complete a particular task. The latter is usually more expensive, as it relies on brute force and requires a large number of examples, which in turn demand a lot of processing power. However, it means we do not have to reduce tasks to a simple set of programmable rules. The BioAnchors approach estimates when it will be affordable to train a model, using trial and error, that can excel at the hardest tasks humans can do.
BioAnchors works under the idea that transformative AI will require a model ten times the size of the human brain. There is much speculation on how accurate this idea is, but it is one of the simplifications used by this approach to develop its timeline.
To get an idea of how much it would cost to train a model to complete a particular task, BioAnchors relies on the time it would take a human to complete that task. For example, it would be more costly to give an AI a million tries at a task a human can complete in 100 minutes than to give it 10 million tries at a task a human can complete in 1 minute.
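As a rough illustration of this idea, the sketch below multiplies the number of tries by the human time per try; the per-task-minute compute unit is a hypothetical placeholder, not a figure from the BioAnchors report.

```python
# Illustrative sketch only: training cost scales with (number of tries) x (human time per try).
# COMPUTE_PER_TASK_MINUTE is a made-up unit, not an estimate from the BioAnchors report.

COMPUTE_PER_TASK_MINUTE = 1.0

def training_cost(tries, human_minutes_per_try):
    """Total compute: every try replays the full length of the task."""
    return tries * human_minutes_per_try * COMPUTE_PER_TASK_MINUTE

long_horizon = training_cost(tries=1_000_000, human_minutes_per_try=100)   # 100,000,000 units
short_horizon = training_cost(tries=10_000_000, human_minutes_per_try=1)   #  10,000,000 units

print(long_horizon / short_horizon)  # -> 10.0: the long task costs 10x more despite 10x fewer tries
```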
We can think of the feasibility of training such a model as two points on a sliding scale. One point is the amount of compute that AI labs and governments are currently willing and able to pay for; the other is the amount required to train such a model. Improvements in hardware and software move the required amount down toward what is affordable, while a growing economy pushes the affordable amount up toward what is required. The BioAnchors approach estimates the rate at which these two points close, under its stated assumptions, and derives its conclusions from when they meet.
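A minimal sketch of this crossover is given below, with entirely hypothetical starting values and growth rates; none of the numbers come from the BioAnchors report.

```python
# Hypothetical sketch of the "sliding scale": affordable compute grows each year
# (bigger budgets, cheaper hardware) while required compute shrinks (better algorithms).
# Transformative AI becomes feasible roughly when the two curves cross.
# Every number below is a placeholder, not a BioAnchors estimate.

affordable = 1e24        # compute the largest actors will pay for today (hypothetical)
required = 1e30          # compute needed to train a transformative model (hypothetical)
affordable_growth = 2.0  # affordable compute doubles each year (hypothetical)
required_shrink = 0.8    # algorithmic progress cuts the requirement by 20% per year (hypothetical)

year = 2025
while affordable < required:
    affordable *= affordable_growth
    required *= required_shrink
    year += 1

print(f"Crossover around {year} under these made-up assumptions.")
```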
Some say the BioAnchors method is too aggressive in the assumptions it makes about the intelligence of the model. For instance, there is no reason to think the current architectures at our disposal can learn everything a human knows, so transformative AI may require a new architecture. Furthermore, by judging an AI purely on its ability to complete human tasks, we cannot rule out that it is merely reproducing patterns it has observed; how do we know the agent possesses a form of general reasoning? Other concerns about BioAnchors being too aggressive arise from its assumptions about computing power. The framework assumes that if someone had access to the compute needed to train a transformative AI model, they would attempt it and would succeed. Similarly, the BioAnchors method fixes the threshold of resources required to train a transformative model and does not consider the possibility that a new approach to training could lower that threshold.
On the other hand, some believe the BioAnchors method is too conservative. Who is to say that ten times human brain capacity is the boundary beyond which we get a transformative AI? A much smaller model that only works well on simple tasks may still have a significant impact on society, depending on the simple tasks at which it excels.
Despite these nuances, the BioAnchors approach arrives at the following timeline:
- 10% chance of transformative AI by 2036
- 50% chance of transformative AI by 2055
- 80% chance of transformative AI by 2100
In the spirit of the BioAnchors approach, some have conducted an evolutionary analysis to construct their own timeline. In this approach, we assume a transformative AI will have to go through a training process equivalent to the computations executed by all the animals in history during natural selection. This is deemed to be a fairly conservative approach and estimates a 50% chance of transformative AI by 2100.
All in all, the BioAnchors approach is useful for providing rough timings of major milestones on the road to transformative AI; however, it should by no means be used to dictate the pace of policy development and AI safety research. The article gives some pros and cons of the method.
Pros:
- Forms estimates from objective facts and explicit assumptions
- Some high-level points of BioAnchors do not depend on any estimates or assumptions:
  - In a decade or so we will have AI models comparable in size to human brains
  - Transformative AI will probably become affordable this century
- We can compare our observations to the framework to update our beliefs
Cons:
- Relies on multiple uncertain estimates, including:
  - Whether AI can learn the key tasks
  - Comparing AI models with the size of human brains
  - Characterising task types
  - Using model size and task type to estimate the expense of training
  - Estimating advances in hardware and software
  - Estimating future increases in AI lab budgets