200-2,000 signals per second
Rate at which a typical human neuron can process information.
--ignitarium
Part 1: Introduction
Part 2: The Era of AI: Now and Tomorrow
Part 3: How AI is Transforming Higher Education
Part 4: The Future of Work
The history of AI can be traced to the mid-20th century, when scientists and engineers began to explore the possibility of creating machines that could think for themselves. One of the earliest milestones in AI research was the Turing test, proposed by Alan Turing in 1950.
The test is simple: a human judge engages in a natural language conversation with two other parties, one of whom is a human and the other a machine. If the judge cannot reliably tell the machine from the human, then the machine is said to have passed the Turing test.
There’s much debate about whether anything has passed the Turing test.
In 2022, for example, Google's AI LaMDA was reported to have passed it, but some experts argued that LaMDA was simply able to exploit the weaknesses of the Turing test.
What we can all agree on is that AI is changing the course of technology and our lives.
Before we look at how AI is primed to solve challenges in higher education, it’s important to define the types of AI we’ll be focusing on.
Machine learning: A type of AI that allows computers to learn without being explicitly programmed.
Deep learning: A type of machine learning that uses artificial neural networks to learn from data.
Natural language processing (NLP): The ability of machines to understand and process human language.
Large language models (LLMs): AI trained on a large corpus of text that can answer questions in natural language.
"In the positive scenario, AI will be doing its best to make you happy. So that might work out pretty well." - Elon Musk, CEO, Tesla and SpaceX
Predictive AI uses historical data to make predictions about future events. For example, businesses use predictive AI algorithms to predict which customers are likely to churn or which products are likely to be purchased by a certain demographic.
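To make churn prediction concrete, here is a toy logistic-regression model in plain Python. Every customer, feature, and number in it is invented for illustration; production systems train far richer models on real behavioral data.

```python
import math

# Toy customer features, scaled to [0, 1]: [tenure / 60 months, support tickets / 10].
# All values are invented for illustration.
X = [[0.03, 0.8], [0.05, 0.6], [0.07, 0.7],   # short tenure, many tickets (churned)
     [0.60, 0.1], [0.80, 0.0], [1.00, 0.2]]   # long tenure, few tickets (stayed)
y = [1, 1, 1, 0, 0, 0]                        # 1 = churned, 0 = stayed

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit a tiny logistic-regression model with stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for xi, yi in zip(X, y):
        err = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b) - yi
        w = [w[0] - lr * err * xi[0], w[1] - lr * err * xi[1]]
        b -= lr * err

def churn_probability(customer):
    """Estimated probability that this customer will churn."""
    return sigmoid(w[0] * customer[0] + w[1] * customer[1] + b)

print(round(churn_probability([0.08, 0.5]), 2))  # new customer, many tickets: high risk
print(round(churn_probability([0.85, 0.1]), 2))  # loyal customer, few tickets: low risk
```

The model learns from historical examples which feature patterns preceded churn, then scores new customers against those patterns, which is the essence of predictive AI.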
Personalized entertainment: Netflix, Amazon Prime, and Spotify customize recommendations based on what someone has watched (and liked) in the past. Since it began giving personalized recommendations in 2009, Netflix has grown from 10 million subscribers to over 220 million. Arguably, some of that growth can be attributed to personalization. Companies that grow faster drive 40 percent more of their revenue from personalization than their slower-growing counterparts, according to McKinsey.
Banking: With predictive AI, fraud detection systems at banks analyze huge amounts of data and identify unusual patterns compared to a person’s historical purchasing and other behavior. In 2021, Bank of America's AI system prevented over $2 billion in fraudulent transactions. This represents a 30% increase over the previous year. The bank estimates that it saved its customers over $100 million in losses due to fraud. And McKinsey estimates that AI will save the financial industry $1 trillion by the end of 2030.
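The pattern-matching at the heart of fraud detection can be sketched very simply: compare a new transaction against a customer's history and flag outliers. The amounts and the three-standard-deviation threshold below are illustrative assumptions, not how any real bank scores transactions.

```python
import statistics

# A customer's recent purchase amounts (invented for illustration).
history = [12.50, 40.00, 25.75, 18.20, 33.10, 22.00, 29.95, 15.40]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def looks_fraudulent(amount, threshold=3.0):
    """Flag a transaction that sits far outside the historical pattern."""
    return abs(amount - mean) / stdev > threshold

print(looks_fraudulent(27.00))   # typical purchase -> False
print(looks_fraudulent(950.00))  # wildly out of pattern -> True
```

Real systems weigh many more signals (merchant, location, time of day, device), but each boils down to the same question: how unusual is this event given this customer's history?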
Forecasting: Retailers are using AI-powered forecasting to predict future demand of products. Walmart’s AI system, for example, analyzes historical sales data, weather forecasts, and social media trends to stock its shelves with the right amount of product at the right time. The company estimates that it saved over $500 million in costs due to improved inventory management.
Generative AI refers to a category of artificial intelligence algorithms and models that are designed to create new content or data based on data they have been trained on. Unlike typical AI systems that focus on recognizing patterns and making predictions, generative AI takes things up a notch by making something that didn’t exist before.
Business and creative writing: OpenAI's ChatGPT can write all types of content (from prose to poetry to jokes) and do so in a specified tone, style, grade level, and more. It is being used to create marketing and advertising copy and even business plans. The "GPT" in ChatGPT stands for "Generative Pre-trained Transformer."
Software: GitHub Copilot is an AI tool that helps developers write code faster and more efficiently. When a user starts typing code, GitHub Copilot analyzes the context in the file they’re working in, along with related files, and offers suggestions to complete it. The suggestions are generated by OpenAI Codex, which understands the code the person is writing and generates code in a similar style. As the AI behind GitHub Copilot has evolved, the tool now includes an AI-powered chat interface for asking programming questions and getting explanations, AI-generated descriptions for pull requests, and AI-generated answers to questions about documentation.
Healthcare: Kaiser Permanente launched a groundbreaking ambient AI scribe initiative across 40 hospitals and over 600 medical offices in eight states and Washington D.C. The AI-powered clinical documentation tool, developed by Abridge, transcribes healthcare visits and drafts clinical notes for electronic health records after obtaining patient consent. The initiative aims to reduce documentation burden, increase face-to-face time with patients, and improve clinical workflow efficiency. Within 10 weeks, 3,442 physicians used the tool in 303,266 patient encounters. Most physicians using the system saved an average of one hour per day at the keyboard. And early evaluation metrics indicate that the ambient AI produces high-quality clinical documentation for physicians' editing.
Images and video: Generative AI for images and video has advanced rapidly, with models like DALL-E 3, Midjourney, and Stable Diffusion creating high-quality images from text prompts. Video generation has also progressed, with tools like Runway and Meta's Movie Gen producing short clips from text or images. These AI tools can now handle complex queries, offer customization, and create more coherent results. Features include text-to-video generation, image animation, and AI-powered editing. Major tech companies are entering the field, with Meta, Amazon, and YouTube introducing their own AI video creation tools. While these advancements offer exciting possibilities, they also raise concerns about deepfakes and misinformation, prompting the implementation of identification measures like watermarking.
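The core idea behind all of these generative tools (learn patterns from training data, then sample new sequences) can be illustrated with a toy word-level Markov chain. The tiny corpus below is invented; real systems use neural networks trained on vastly more text, but the learn-then-generate loop is the same.

```python
import random

# A tiny training corpus, invented for illustration.
corpus = (
    "the model learns patterns from data "
    "the model generates new text from patterns"
).split()

# "Training": count which word follows which in the corpus.
transitions = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Sample a new word sequence from the learned transition table."""
    random.seed(seed)  # fixed seed for reproducibility
    words = [start]
    for _ in range(length - 1):
        choices = transitions.get(words[-1])
        if not choices:
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(generate("the", 6))
```

The output is new in the sense that the exact sequence may never appear in the corpus, yet every step follows a pattern the model observed during training.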
Computer vision is the ability of machines to see and understand the world around them. For example, computer vision algorithms can be used to identify objects in images or to track the movement of people or objects in real-time.
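One building block of computer vision can be sketched in a few lines: convolving an image with a filter that responds to edges. The tiny "image" and kernel below are illustrative; real systems stack millions of learned filters, but each layer starts from this same operation.

```python
# A 4x6 grayscale "image": dark on the left, bright on the right,
# with a vertical edge in the middle (values invented for illustration).
image = [
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]

# A vertical-edge kernel: responds where brightness changes left to right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, k):
    """Slide the kernel over the image, summing elementwise products."""
    kh, kw = len(k), len(k[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(k[i][j] * img[r + i][c + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

response = convolve(image, kernel)
print(response[0])  # -> [0, 27, 27, 0]: strongest response at the edge
```

The filter stays silent over flat regions and fires at the brightness boundary, which is how low-level vision layers locate the outlines of objects.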
Retail and customer experience: AI-powered computer vision is enhancing the retail shopping experience. Retailers like Walmart and Kroger, for example, are piloting smart carts, smart shelves, and automated checkout systems. These systems use computer vision to track inventory in real time, identify when items are misplaced or out of stock, and even analyze customer behavior patterns. Amazon Go stores go a step further, offering "grab and go" shopping where customers can pick up items and leave without a traditional checkout; computer vision and AI handle the transaction automatically.
Medical diagnosis: IBM's Watson Health platform uses computer vision to analyze medical images, such as X-rays and MRI scans. While still under development, the technology is proving to be comparable to or even better than human doctors at identifying abnormalities in images that may be indicative of disease. A study published in the journal Nature found that AI was able to identify skin cancer with an accuracy of 95%, compared to 81% for human dermatologists. The algorithm was trained on a dataset of over 130,000 images, including both benign and malignant lesions.
Agriculture: Companies like Blue River Technology are using computer vision to improve crop yields. Blue River was founded by two Stanford graduate students. Its See and Spray machine is attached to tractors and pulled through crop fields to distinguish between weeds and crops, which often look very similar and are indistinguishable to the untrained eye. Their computer vision technology targets only the weeds so that unwanted plants are sprayed with herbicide while the crops are left untouched.
AGI is a hypothetical type of AI that would have the ability to understand and reason like a human being. The “narrow” intelligence of today’s AI is limited to performing a specific task. For example, AI that is designed to play chess will not be able to write poetry.
General AI, on the other hand, could do things like carry on a conversation with a human and understand the nuances of human language. It could also come up with new approaches to solving complex problems, much like humans do.
Another difference between the two types of AI is that while current AI systems are typically trained on large amounts of data that is specific to the task they are designed to perform, AGI would be able to learn from any type of data, regardless of its source.
That means — in theory — that AGI could adapt to new situations and learn new tasks more easily than current AI systems. And it wouldn’t need to be programmed to do so. AGI also wouldn’t be bound by the amount of data it can process or calculations it can complete. That opens up the possibility for AGI to work at speeds far beyond current AI and even beyond human capabilities.
AGI isn’t yet a reality, but it is a goal that many AI researchers are working towards. It’s also one of the most controversial types of AI.
"AGI could be our savior or our destroyer. It's up to us to decide."
—Bill Gates