Artificial intelligence is mastering language. Should we trust what it says?

But even as GPT-3's fluency has impressed many observers, the large-language-model approach has attracted significant criticism over the past few years. Some skeptics argue that the software is capable only of blind mimicry: it imitates the grammatical patterns of human language but cannot generate its own ideas or make complex decisions, a fundamental limitation that would keep the LLM approach from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in a long history of AI hype, channeling research money and attention into what will ultimately prove a dead end and keeping other promising approaches from maturing. Other critics believe that software like GPT-3 will forever be compromised by the biases, propaganda, and misinformation in its training data, meaning that using it for anything more than parlor tricks will always be irresponsible.

Wherever you land in this debate, the pace of recent improvement in large language models makes it hard to imagine that they will not be deployed commercially in the coming years. And that raises the question of exactly how they, and, for that matter, the other massive advances in artificial intelligence, should be unleashed on the world. In the rise of Facebook and Google, we have seen how dominance in a new technological realm can quickly lead to astonishing power over society, and AI threatens to be even more transformative than social media in its ultimate effects. What is the right kind of organization to build and own something of this scale and ambition, with such promise and such potential for abuse?


Or should we build it at all?

OpenAI's origins date back to July 2015, when a small group of tech luminaries gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place amid two recent developments in the technology world, one positive and one troubling. On the one hand, radical advances in computing power, along with new breakthroughs in the design of neural networks, had created a palpable sense of excitement in the field of machine learning. There was a feeling that the long "AI winter," the decades in which the field failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with a level of accuracy far higher than any neural network had previously achieved. Google quickly swooped in to hire AlexNet's creators, while simultaneously acquiring DeepMind and starting its own initiative called Google Brain. The mainstream adoption of smart assistants like Siri and Alexa had demonstrated that even scripted agents could be breakout consumer products.

But during that same stretch of time, a seismic shift in public attitudes toward Big Tech was underway, as once-popular companies like Google and Facebook were criticized for their near-monopolistic power, their amplification of conspiracy theories, and their relentless funneling of our attention toward algorithmic feeds. Long-term fears about the dangers of artificial intelligence were appearing on op-ed pages and the TED stage. Nick Bostrom of Oxford University published his book Superintelligence, laying out a range of scenarios in which advanced AI might swerve away from humanity's interests, with potentially disastrous consequences. In late 2014, Stephen Hawking told the BBC that "the development of full artificial intelligence could spell the end of the human race." It seemed as if the cycle of corporate consolidation that characterized the social media age was already beginning to repeat itself with AI, only this time the algorithms might not just sow polarization or sell our attention to the highest bidder; they might end up destroying humanity itself. And once again, all the evidence suggested that this power was going to be controlled by a few Silicon Valley megacorporations.


The agenda for the dinner on Sand Hill Road that July night was nothing if not ambitious: figuring out the best way to steer AI research toward the most positive outcome possible, avoiding both the short-term negative consequences that had marred the Web 2.0 era and the long-term existential threats. From that dinner, a new idea began to take shape, one that soon became a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technical as it was organizational: if AI was going to be unleashed on the world in a safe and beneficial way, it would require innovation on the level of governance, incentives, and stakeholder participation. The technical path to what the field calls artificial general intelligence, or AGI, was not yet clear to the group. But the troubling forecasts from Bostrom and Hawking convinced them that the achievement of AGI would concentrate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control it.

In December 2015, the group announced the formation of a new entity called OpenAI. Altman signed on as the venture's CEO, with Brockman overseeing the technology; another dinner attendee, the AlexNet co-creator Ilya Sutskever, was recruited from Google to serve as head of research. (Elon Musk, who was also present at the dinner, joined the board but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambition: "OpenAI is a non-profit artificial intelligence research company," they wrote. "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." They pledged to distribute the benefits of their work "as broadly and evenly" as possible.


Three years later, OpenAI's founders would release a formal charter articulating the core principles behind the new organization. The document was easily read as a subtle rebuke of Google's early "don't be evil" mantra, an acknowledgment that maximizing the social benefits, and minimizing the harms, of a new technology was never a simple calculation. While Google and Facebook had both reached global dominance through closed-source algorithms and proprietary networks, the founders of OpenAI promised to go in the other direction, sharing new research and code freely with the world.
