An AI reading list
My open tabs, gathered
Here are all the tabs I’ve been keeping open so I can get around to reading everything I can about AI. (Which your stuffier pubs call “A.I.,” like they still call IBM “I.B.M.,” which is annoying to type.) I’ve put them here so all of us can put off reading them as a team. Or maybe break ranks and read a few.
The whole time I compiled this, which took way too long, I kept thinking how this should be something I job out to my own personal AI. Hey Sandy! (That’s his or her name.) Take all the open tabs in the browser window I reserve just for AI stuff, and make me a bulleted and linked list of them, each with a pull quote that teases readers to click on the link.
Thanks for reading Reality 2.0 Newsletter! Subscribe for free to receive new posts and support my work.
Will we get that kind of personal help before AI goes viral in the disease sense, and the doomsayers’ worst fears come true? I say that’s Job One. Job Zero for me will be writing about it in the next Reality 2.0 newsletter.
Meanwhile, here ya go, in the order they appeared in my browser window:
Too Big to Challenge?—Reflections on the arrangement of tech and the economy in light of AI, by danah boyd. “…more and more, I'm thinking that obsessing over AI is a strategic distraction more than an effective way of grappling with our sociotechnical reality.”
‘The Godfather of AI’ Quits Google and Warns of Danger Ahead - The New York Times. “For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.”
Yuval Noah Harari argues that AI has hacked the operating system of human civilisation | The Economist. “Storytelling computers will change the course of human history, says the historian and philosopher.” (Paywalled, but you get it.)
Is the Wolf Really at the Door? By Micah Sifry in The Connector. “After so many false alarms and overheated claims about this or that technology being on the verge of ‘changing everything,’ it’s possible we have lost our ability to distinguish real danger from hype.”
LLMs break the internet. Signing everything fixes it. By Gordon Brander. “You thought the internet was a mess before? Get ready for bots that beat the Turing test, synthesize your voice, generate fake social consensus at scale. We’re seeing the beginnings of this already. Expect a tidal wave of spam, identity theft, phishing, ransomware over the next 36 months. The Dead Internet Theory wasn’t wrong, just early.” And, “The first best thing that self-sovereign cryptographic signatures give us is a way to build trusted networks together. I know you, you know me. We exchange keys. From then on, I know it’s you I’m talking to, not a bot impersonating you.”
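Brander’s “I know you, you know me. We exchange keys” flow is easy to sketch. The real proposal involves self-sovereign public-key signatures (something like Ed25519); as a rough stand-in using only Python’s standard library, here’s the same sign-and-verify idea with a pre-shared key and HMAC. The key name and messages are made up for illustration.

```python
import hmac
import hashlib

# Stand-in for self-sovereign signatures: a pre-shared key plus HMAC.
# A real system would use public-key pairs so no secret is shared,
# but the sign/verify flow is the same shape.

def sign(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, signature: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(key, message), signature)

key = b"exchanged-out-of-band"          # "We exchange keys."
msg = b"This post really came from me."
sig = sign(key, msg)

assert verify(key, msg, sig)                     # genuine message passes
assert not verify(key, b"forged by a bot", sig)  # impersonation fails
```

Once the signature checks out, “I know it’s you I’m talking to, not a bot impersonating you.”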
AudioGPT — A Glimpse into the Future of Creating Music | by Max Hilsdorf | May, 2023 | Towards Data Science. A research paper, explained.
What Are Transformer Models and How Do They Work? | By Luis Serrano in Cohere. “…transformers are so incredibly good at keeping track of the context, that the next word they pick is exactly what it needs to keep going with an idea.”
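Serrano’s point about “keeping track of the context” comes down to the attention step: each position scores every other position, and a softmax over those scores decides how much of each one flows into the next prediction. A toy, pure-Python sketch (the vectors here are made-up numbers, not a trained model):

```python
import math

def softmax(xs):
    m = max(xs)                     # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    d = len(query)
    # Scaled dot-product scores: how relevant is each key to the query?
    scores = [dot(query, k) / math.sqrt(d) for k in keys]
    weights = softmax(scores)       # attention weights sum to 1
    # Weighted sum of value vectors: context flows in proportionally.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention([1.0, 0.0], keys, values)  # query lines up with key 0
assert out[0] > out[1]                     # so value 0 dominates the mix
```

That proportional mixing, stacked across many layers and heads, is what lets the model pick “exactly what it needs to keep going with an idea.”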
Google "We Have No Moat, And Neither Does OpenAI". “Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI.”
AI Is Coming for Your Web Browser. Here’s How to Use It | WIRED. “Microsoft Edge and other browsers have baked-in powerful tools to help you write emails, generate images, and more.”
Do Not Fear Artificial Intelligence, by Francine Hardaway. “Remember Khan Academy? It is used by almost all schools now, and Sal Khan, the young founder, decided that rather than be displaced by artificial intelligence, he would use it to provide every child an individual tutor. He has launched KhanMigo, a chat bot for education, and students can use it to answer specific questions and get information about subjects that are giving them problems. You should watch his recent Ted talk if you are a parent, or even if you aren’t, because artificial intelligence can end the problems of restless boys in the classroom, parents who want control over what their children learn and read, and gifted kids who are held back by classrooms full of kids that need to catch up before they need to go ahead.”
Stability AI. Play with it. Also DALL-E and Midjourney.
Inflection AI introduces new ChatGPT-like personal AI chatbot ‘Pi’. “The company is now working on Pi’s boundary training and fine tuning to expand its capabilities.”
AI, NIL, and Zero Trust Authenticity – Stratechery by Ben Thompson. Worth it for many reasons, but the best visual is U.S. Recorded Music Revenues By Format.
We Interviewed the Engineer Google Fired for Saying Its AI Had Come to Life. “There’s a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them.”
Inside OpenAI [Entire Talk] - YouTube. Ilya Sutskever, Co-Founder & Chief Scientist, OpenAI. At Stanford, April 19, 2023.
Technology | The New Yorker. A list of recent and relevant tech pieces.
Artificial Intelligence: doomsday for humanity, or for capitalism? | Science & Technology | World. “Daniel Morley, examines the claim that AI is ‘conscious’ or ‘superhuman’, draws out the real potential for this technology, and explains how we are really enslaved by the machine under capitalism.”
What is artificial intelligence? AI glossary of terms to know - The Washington Post. Also titled, “A curious person’s guide to artificial intelligence: Everything you wanted to know about the AI boom but were too afraid to ask.”
A radical new idea for regulating AI - POLITICO. “When you start talking about humanity’s future, there’s another reason that tying the data LLMs use back to the original source could be important. Some thinkers in the data dignity movement argue that the risk to privacy posed by AI is existential, making it imperative that internet users have control over how personal data, like their speech patterns, written syntax, and even their gait are used.”
What Really Made Geoffrey Hinton Into an AI Doomer | WIRED. “The AI pioneer is alarmed by how clever the technology he helped create has become. And it all started with a joke.”
OpenAI’s CEO Says the Age of Giant AI Models Is Already Over | WIRED. “Sam Altman says the research strategy that birthed ChatGPT is played out and future strides in artificial intelligence will require new ideas.”
How I Built WritingGPT, a Fully Automated AI Writing Team | by Thomas Smith | The Generator | April 2023 | Medium. “It writes articles that rank on Google for about $1 each”
Generative AI is a legal minefield, in Axios. “Why it matters: The courts will have to sort out knotty problems like whether AI companies had rights to use the data that trained their systems, whether the output of generative engines can be copyrighted, and who is responsible if an AI engine spits out defamatory or dangerous information.”
AI gains “values” with Anthropic’s new Constitutional AI chatbot approach | Ars Technica. “List of guiding AI values draws on UN Declaration of Rights—and Apple's terms of service.”
Hollywood’s Screenwriters Are Right to Fear AI | WIRED. “The Writers Guild of America’s demands for guardrails on artificial intelligence are a smart move—and the stakes are higher than ever.”
AI gets its education from everything we ever wrote for the web | Washington Post, Scott Rosenberg. “The AI boom is built on data, the data comes from the internet, and the internet came from us…A Washington Post analysis of one public data set widely used for training AIs shows how broadly today's AI industry has sampled the 30-year treasury of web publishing to tutor their neural networks.”
The Age of AI has begun | Bill Gates. “Artificial intelligence is as revolutionary as mobile phones and the Internet.”
From AI to abortion, the scientific failure to understand consciousness harms the nation | Erik Hoel in The Intrinsic Perspective. “…without a scientific understanding of consciousness, we cannot make ethical decisions regarding artificial intelligence”
Facebook's Powerful Large Language Model Leaks Online | Vice. “Facebook’s large language model, which is usually only available to approved researchers, government officials, or members of civil society, has now leaked online for anyone to download. The leaked language model was shared on 4chan, where a member uploaded a torrent file for Facebook’s tool, known as LLaMa (Large Language Model Meta AI), last week.”
Inside the secret list of websites that make AI like ChatGPT sound smart | The Washington Post. “To look inside this black box, we analyzed Google’s C4 data set, a massive snapshot of the contents of 15 million websites that have been used to instruct some high-profile English-language AIs, called large language models, including Google’s T5 and Facebook’s LLaMA. (OpenAI does not disclose what datasets it uses to train the models backing its popular chatbot, ChatGPT)”
Opinion | A.I.: Actually Insipid Until It’s Actively Insidious | Maureen Dowd in The New York Times.
The Needed Executive Actions to Address the Challenges of Artificial Intelligence - Center for American Progress. “The policy issues and recommendations below apply to currently available automated systems—with special consideration of LLM-based AI applications—and with an eye to other forms of advanced AI on the horizon.”
The approaching tsunami of addictive AI-created content will overwhelm us, by Charles Arthur. “We're unready for the coming deluge of video, audio, photos, and even text generated by machine learning to grab and hold our attention…Remember Arthur C. Clarke’s comment that ‘any sufficiently advanced technology is indistinguishable from magic.’ The magic is among us now, seeping into the everyday. The tide is rising. But the real wave is yet to come.”
Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’ | Jaron Lanier | The Guardian. “From my perspective…the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”
Why GPT4 Isn't Ready For Data Insight Primetime (yet). - StJohn Deakins with CitizenMe. “Transparency about the provenance and veracity of data is essential. Doing this in a ‘human first’ way, with ethics baked in, is foundational. Enabling people to participate with their own data in these models will be transformational.”
ChatGPT Gets Its “Wolfram Superpowers”!—Stephen Wolfram Writings. “I see what’s happening now as a historic moment. For well over half a century the statistical and symbolic approaches to what we might call ‘AI’ evolved largely separately. But now, in ChatGPT + Wolfram they’re being brought together. And while we’re still just at the beginning with this, I think we can reasonably expect tremendous power in the combination—and in a sense a new paradigm for ‘AI-like computation’, made possible by the arrival of ChatGPT, and now by its combination with Wolfram|Alpha and Wolfram Language in ChatGPT + Wolfram.”
OpenAI’s hunger for data is coming back to bite it | By Melissa Heikkilä in MIT Technology Review. “The company’s AI services may be breaking data protection laws, and there is no resolution in sight.”
The Only Way to Deal With the Threat From AI? Shut It Down | Time. “The key issue is not ‘human-competitive’ intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence.”
Annals Of Artificial Intelligence News, Opinion, and Analysis—The New Yorker
Artificial Intelligence (A.I.) | The New Yorker
Will A.I. Become the New McKinsey? By Ted Chiang. “As it’s currently imagined, the technology promises to concentrate wealth and disempower workers. Is an alternative possible?”
There is No A.I. By Jaron Lanier. “There are ways of controlling the new technology—but first we have to stop mythologizing it.”
What Kind of Mind Does ChatGPT Have? By Cal Newport. “Large language models seem startlingly intelligent. But what’s really happening under the hood?”
ChatGPT is a Blurry JPEG of the Web. By Ted Chiang. “OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?”
A Photographer Embraces the Alien Logic of A.I. By Chris Wiley. “Charlie Engman’s experiments with Midjourney have yielded fleshy distortions, peculiar make-out sessions, and unfamiliar pictures of his mother.”
What We Still Don’t Know About How A.I. is Trained. By Sue Halpern. “GPT-4 is a powerful, seismic technology that has the capacity both to enhance our lives and diminish them.”
Bing and the Dawn of the Post-Search Internet. By Kyle Chayka. “So much of the current Web was designed around aggregation. What value will legacy sites have when bots can do the aggregation for us?”
Is A.I. Art Stealing from Artists? By Kyle Chayka. “According to the lawyer behind a new class-action suit, every image that a generative tool produces ‘is an infringing, derivative work.’”
What a Sixty-Five-Year-Old Book Teaches Us About A.I. By David Owen. “Rereading an oddly resonant—and prescient—consideration of how computation affects learning.”
“It’s Not Possible for Me to Feel or Be Creepy”: An Interview with ChatGPT. By Andrew Marantz. “The large language model discusses bullshit, rogue A.I., and the nature of beauty.”
Pond brains and GPT-4 - by Gordon Brander - Subconscious. “We are faced with a surprising fact. If you predict the next token at large enough scale, you can generate coherent communication, generalize and solve problems, even pass the Turing Test. So is this actually thinking?”
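Brander’s “predict the next token at large enough scale” has a smallest-possible-scale version: a bigram model that counts which word follows which, then predicts the most frequent successor. The corpus below is a made-up toy; LLMs do the same basic job with vastly more data and context.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    # Count, for each word, which words follow it and how often.
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(model, word):
    # Predict the most frequent successor, or None for unseen words.
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept and the cat ran"
model = train_bigrams(corpus)
assert predict_next(model, "the") == "cat"  # "cat" follows "the" most often
assert predict_next(model, "on") == "the"
```

Whether scaling that trick up to trillions of tokens yields something we should call thinking is exactly the question the piece sits with.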
How A.I. could help democracy. By Bruce Schneier, Henry Farrell, and Nathan E. Sanders in Slate. “The next generation of A.I. experimentation should happen in the laboratories of democracy: states and municipalities. Online town halls to discuss local participatory budgeting proposals could be an easy first step. Commercially available and open-source LLMs could bootstrap this process and build momentum toward federal investment in a public A.I. option.”
AI_JointStatement_EN_0427. By David Farber and Shumpei Kumon. “Considering the current innovativeness of generative AI and its great potential, we believe it is extremely important for users to be fully aware of the capabilities and limitations of AI before making use of it. We do not take the position of unconditionally and comprehensively prohibiting the use of generative AI. AI providers should disclose and provide sufficient information to users in a timely manner in accordance with the ‘AI Governance Guidelines’ mentioned above.”
OpenAI Threatens Popular GitHub Project With Lawsuit Over API Use | By Adam Piltch in Tom's Hardware. “If you want to try it while it's still there, check out GPT4Free on GitHub (opens in a new tab). If you want to set up your own ChatGPT-like chatbot without potentially running afoul of OpenAI's lawyers, see our tutorial on how to run ChatGPT on a Raspberry Pi or PC.”
Rethinking democracy for the age of AI | By Bruce Schneier in CyberScoop. “We need to recreate our system of governance for an era in which transformative technologies pose catastrophic risks as well as great promise.”
Google I/O 2023: Google Adds Generative AI to Search | WIRED. “At Google’s annual I/O conference today, the search giant announced that it will infuse results with generative artificial intelligence technology similar to that behind ChatGPT. The company is launching an experimental version of its prized search engine that incorporates text generation like that powering ChatGPT and other advanced chatbots.”
MIT Technology Review: The open-source AI boom is precarious because it is built on top of giant models like LLaMA and GPT-3, and could collapse if Meta and OpenAI decide to shut shop, by Will Douglas Heaven. “Greater access to the code behind generative models is fueling innovation. But if top companies get spooked, they could close up shop.”
Credit for the image up top goes to OpenAI’s DALL-E2, answering the prompt, “Render the word AI entirely in tabs such as one sees at the top of a browser window.” (Or something like that.)