
Table of Contents

Part 1: Introduction

Part 2: The Era of AI: Now and Tomorrow

Part 3: How AI is Transforming Higher Education

Part 4: The Future of Work

Part 5: The AI Strategy and Adoption Plan


The history of AI can be traced to the mid-20th century, when scientists and engineers began to explore whether machines could think for themselves. One of the earliest milestones in AI research was the Turing test, proposed by Alan Turing in 1950.

The test is simple: a human judge engages in a natural language conversation with two unseen parties, one a human and the other a machine. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the Turing test.

There’s much debate about whether anything has passed the Turing test.

In 2022, for example, Google's AI LaMDA was reported to have passed it, but some experts argued that LaMDA was simply able to exploit the weaknesses of the Turing test.

What we can all agree on is that AI is changing the course of technology and our lives.

Milestones in the history of AI:

  • 1997: Deep Blue, a chess-playing computer developed by IBM, defeats world chess champion Garry Kasparov. (spectrum.ieee.org)
  • 2011: IBM's Watson wins Jeopardy! (nytimes.com)
  • 2012: Google receives the first self-driving car license in Nevada. (reuters.com)
  • 2014: Amazon releases the personal assistant Alexa in its Echo speaker. (theverge.com)
  • 2017: Google introduces the Transformer architecture, laying the foundation for large language models. (Google)
  • 2018: Google's DeepMind predicts the 3D shapes of proteins, ushering in a new era of medical progress. (theguardian.com)
  • 2023: ChatGPT becomes the fastest-growing consumer application in history. (reuters.com)
  • 2023: GPT-4 is released, demonstrating multimodal capabilities and improved performance. (OpenAI)
  • 2023: Meta Platforms introduces consumer-focused generative AI products.
  • 2023: Microsoft integrates GPT-4 into Bing Search and the Edge browser.
  • 2024: Google releases Gemini 1.5 Pro and 1.5 Flash, with advanced text generation and image analysis capabilities.
  • 2024: Salesforce unveils AI agents for sales teams, including Einstein SDR Agent and Einstein Sales Coach Agent.
  • 2024: HubSpot introduces Agent.AI, a network of autonomous AI agents for marketing, sales, and customer service.
  • 2024: The Claude 3 family of models is introduced, setting new benchmarks in cognitive tasks. (AI-Pro)

Before we look at how AI is primed to solve challenges in higher education, it’s important to define the types of AI we’ll be focusing on.

Key AI Terms to Know

  • Machine learning: A type of AI that allows computers to learn without being explicitly programmed.

  • Deep learning: A type of machine learning that uses artificial neural networks to learn from data.

  • Natural language processing: The ability of machines to understand and process human language.

  • Language model: AI trained on a large corpus of text that can answer questions in natural language.

"In the positive scenario, AI will be doing its best to make you happy. So that might work out pretty well." - Elon Musk, CEO, Tesla and SpaceX

Types of AI


Predictive

Predictive AI uses historical data to make predictions about future events. For example, businesses use predictive AI algorithms to predict which customers are likely to churn or which products are likely to be purchased by a certain demographic.
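
To make the churn example concrete, here is a minimal sketch of a churn predictor using scikit-learn. The feature names and the tiny dataset are hypothetical placeholders, not any company's actual model.

```python
# Minimal churn-prediction sketch (hypothetical features and data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row: [months_as_customer, logins_last_30_days, support_tickets]
X = np.array([
    [24, 20, 0], [3, 2, 4], [12, 15, 1], [1, 1, 5],
    [36, 25, 0], [2, 3, 3], [18, 10, 2], [4, 1, 6],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = customer churned

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

# Probability that a new customer (6 months, 2 logins, 3 tickets) churns.
print(model.predict_proba([[6, 2, 3]])[0][1])
```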

In the world:

Personalized entertainment: Netflix, Amazon Prime, and Spotify customize recommendations based on what someone has watched (and liked) in the past. Since it began giving personalized recommendations in 2009, Netflix has grown from 10 million subscribers to over 220 million. Arguably, some of that growth can be attributed to personalization. Companies that grow faster drive 40 percent more of their revenue from personalization than their slower-growing counterparts, according to McKinsey.
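
A toy version of this kind of personalization is item-based collaborative filtering: recommend titles whose rating patterns resemble those of titles the viewer already liked. The sketch below is purely illustrative, with a made-up ratings matrix; it is not Netflix's or Spotify's actual system.

```python
# Item-based collaborative filtering sketch (toy ratings matrix).
import numpy as np

# Rows = users, columns = titles; 0 means "not watched".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Similarity of every title to title 0, based on who rated it how.
sims = [cosine(ratings[:, 0], ratings[:, j]) for j in range(ratings.shape[1])]
print(sims)  # titles most similar to title 0 are good recommendations for its fans
```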

Banking: With predictive AI, fraud detection systems at banks analyze huge amounts of data and flag patterns that are unusual compared to a person's historical purchasing and other behavior. In 2021, Bank of America's AI system prevented over $2 billion in fraudulent transactions, a 30% increase over the previous year, and the bank estimates that it saved its customers over $100 million in fraud losses. McKinsey estimates that AI will save the financial industry $1 trillion by the end of 2030.
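
The pattern-spotting piece can be sketched with an off-the-shelf anomaly detector. The transaction amounts below are invented, and real fraud systems use far richer features than this.

```python
# Anomaly-detection sketch for flagging unusual transactions.
import numpy as np
from sklearn.ensemble import IsolationForest

# [amount_usd, hour_of_day] for a customer's recent transactions.
history = np.array([
    [12, 9], [40, 12], [25, 18], [60, 19], [15, 8],
    [33, 13], [22, 17], [48, 20], [18, 10], [55, 21],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

# A $2,400 purchase at 3 a.m. stands out against this history.
print(detector.predict([[2400, 3]]))  # -1 = flagged as anomalous
```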

Forecasting: Retailers are using AI-powered forecasting to predict future demand for products. Walmart's AI system, for example, analyzes historical sales data, weather forecasts, and social media trends to stock shelves with the right amount of product at the right time. The company estimates that improved inventory management has saved it over $500 million in costs.
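
Demand forecasting at its simplest blends a trend with seasonality. The sketch below fits a least-squares trend plus day-of-week seasonal averages to made-up daily sales, a far cry from Walmart's production system.

```python
# Trend-plus-seasonality demand forecast sketch (synthetic daily sales).
import numpy as np

sales = np.array([100, 102, 98, 120, 130, 150, 90,   # week 1
                  104, 106, 101, 125, 134, 156, 95])  # week 2
t = np.arange(len(sales))

# Linear trend via least squares.
slope, intercept = np.polyfit(t, sales, 1)
trend = slope * t + intercept

# Average residual for each day of the week = seasonal component.
seasonal = np.array([(sales - trend)[t % 7 == d].mean() for d in range(7)])

# Forecast the next 7 days: extend the trend, add the seasonal pattern.
future = np.arange(len(sales), len(sales) + 7)
forecast = slope * future + intercept + seasonal[future % 7]
print(np.round(forecast, 1))
```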

In higher education:

  • Ivy Tech Community College developed an algorithm that can predict a student's final grade in a course with 60%-70% accuracy by week two of the semester, leading to 3,000 students receiving the interventions they needed to pass their classes. (Forbes)

Generative

Generative AI refers to a category of artificial intelligence algorithms and models that are designed to create new content or data based on data they have been trained on. Unlike typical AI systems that focus on recognizing patterns and making predictions, generative AI takes things up a notch by making something that didn’t exist before.

In the world:

Business and creative writing: OpenAI's ChatGPT can write all types of content (from prose to poetry to jokes) and do so in a specified tone, style, grade level, and more. It is being used to create marketing and advertising copy and even business plans. The "GPT" in ChatGPT stands for "Generative Pre-trained Transformer."

  • Generative: The new text ChatGPT creates based on the patterns it learned during training.
  • Pre-trained: ChatGPT learned the statistical relationship between words and phrases by being trained on a massive dataset of text and code.
  • Transformer: This is the type of architecture the model uses. Transformers are a type of neural network particularly well-suited to natural language processing tasks; a minimal sketch of their core operation follows below.
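
To give a feel for what "Transformer" means in practice, here is a minimal sketch of its core operation, scaled dot-product self-attention, in plain NumPy. Real models add learned projections, multiple attention heads, and many stacked layers.

```python
# Scaled dot-product self-attention, the core operation of the Transformer.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each token attends to every other token
    weights = softmax(scores)        # each row sums to 1
    return weights @ V               # weighted mix of value vectors

# Three tokens, each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
print(self_attention(X, X, X))  # each output row blends information from all tokens
```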

Software: GitHub Copilot is an AI tool that helps developers write code faster and more efficiently. When a user starts typing code, GitHub Copilot analyzes the context in the file they're working in, along with related files, and offers suggestions to complete it. The suggestions are generated by OpenAI Codex, which is able to understand the code the person is writing and generate similar code. As the AI behind GitHub Copilot has evolved, the tool now includes an AI-powered chat interface for asking programming questions and getting explanations, AI-generated descriptions for pull requests, and AI-generated answers to questions about documentation.

Healthcare: Kaiser Permanente launched a groundbreaking ambient AI scribe initiative across 40 hospitals and over 600 medical offices in eight states and Washington D.C. The AI-powered clinical documentation tool, developed by Abridge, transcribes healthcare visits and drafts clinical notes for electronic health records after obtaining patient consent. The initiative aims to reduce documentation burden, increase face-to-face time with patients, and improve clinical workflow efficiency. Within 10 weeks, 3,442 physicians used the tool in 303,266 patient encounters. Most physicians using the system saved an average of one hour per day at the keyboard. And early evaluation metrics indicate that the ambient AI produces high-quality clinical documentation for physicians' editing.

Images and video: Generative AI for images and video has advanced rapidly, with models like DALL-E 3, Midjourney, and Stable Diffusion creating high-quality images from text prompts. Video generation has also progressed, with tools like Runway and Meta's Movie Gen producing short clips from text or images. These AI tools can now handle complex queries, offer customization, and create more coherent results. Features include text-to-video generation, image animation, and AI-powered editing. Major tech companies are entering the field, with Meta, Amazon, and YouTube introducing their own AI video creation tools. While these advancements offer exciting possibilities, they also raise concerns about deepfakes and misinformation, prompting the implementation of identification measures like watermarking.
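
As one hedged illustration of text-to-image generation, the open-source diffusers library exposes Stable Diffusion behind a few lines of Python. Checkpoint availability and hardware requirements vary, so treat this as a sketch rather than a recipe.

```python
# Text-to-image sketch with Stable Diffusion via the diffusers library.
# Requires: pip install diffusers transformers torch (GPU assumed below).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a commonly used public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a university campus in autumn").images[0]
image.save("campus.png")
```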

In higher education:

  • Generative AI is transforming higher education in various ways. For example, at the University of South Florida, Professor Sanghoon Park created an AI-powered chatbot to provide motivational messages and academic support to students in his online class. And schools such as American University's Kogod School of Business have updated their curriculum to include prompt engineering, programming, and AI/ML models, preparing students for an AI-driven workforce. (Inside Higher Ed)

Computer Vision

Computer vision is the ability of machines to see and understand the world around them. For example, computer vision algorithms can be used to identify objects in images or to track the movement of people or objects in real-time.
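
For a concrete taste of identifying objects in images, torchvision ships pretrained detectors that work in a few lines. The sketch below assumes a local photo.jpg and is illustrative, not any vendor's production pipeline.

```python
# Object-detection sketch using a pretrained torchvision model.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Load an image as a CxHxW float tensor with values in [0, 1].
img = convert_image_dtype(read_image("photo.jpg"), torch.float)

with torch.no_grad():
    prediction = model([img])[0]  # boxes, labels, scores for detected objects

# Keep only confident detections.
for box, label, score in zip(
    prediction["boxes"], prediction["labels"], prediction["scores"]
):
    if score > 0.8:
        print(int(label), float(score), box.tolist())
```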

In the world:

Retail and customer experience: AI-powered computer vision is enhancing the retail shopping experience. Retailers like Walmart and Kroger, for example, are piloting smart carts, smart shelves, and automated checkout systems. These systems use computer vision to track inventory in real time, identify when items are misplaced or out of stock, and even analyze customer behavior patterns. Amazon Go stores go further, offering "grab and go" shopping where customers pick up items and leave without a traditional checkout, with computer vision and AI handling the transaction automatically.

Medical diagnosis: IBM's Watson Health platform uses computer vision to analyze medical images, such as X-rays and MRI scans. While still under development, the technology is proving to be comparable to or even better than human doctors at identifying abnormalities in images that may be indicative of disease. A study published in the journal Nature found that AI was able to identify skin cancer with an accuracy of 95%, compared to 81% for human dermatologists. The algorithm was trained on a dataset of over 130,000 images, including both benign and malignant lesions.

Agriculture: Companies like Blue River Technology, founded by two Stanford graduate students, are using computer vision to improve crop yields. Its See and Spray machine is attached to tractors and pulled through crop fields to distinguish between weeds and crops, which often look so similar that the untrained eye cannot tell them apart. The computer vision technology targets only the weeds, so unwanted plants are sprayed with herbicide while the crops are left untouched.

In higher education:

  • Computer vision also applies to the learning experience. Facial emotion analysis, for example, gauges student engagement and emotional responses during lectures or online learning sessions. Computer vision can also streamline attendance-taking by automatically recording who is in attendance, saving time for instructors and ensuring accurate records. Crowd analysis is another example: in large lecture halls or at campus events, computer vision can analyze crowd dynamics to optimize space utilization and ensure safety.

Artificial General Intelligence

AGI is a hypothetical type of AI that would have the ability to understand and reason like a human being. The “narrow” intelligence of today’s AI is limited to performing a specific task. For example, AI that is designed to play chess will not be able to write poetry.

General AI, on the other hand, could do things like carry on a conversation with a human and understand the nuances of human language. It could also come up with new approaches to solving complex problems, much like humans do.

Another difference between the two types of AI is that while current AI systems are typically trained on large amounts of data that is specific to the task they are designed to perform, AGI would be able to learn from any type of data, regardless of its source.

That means — in theory — that AGI could adapt to new situations and learn new tasks more easily than current AI systems. And it wouldn’t need to be programmed to do so. AGI also wouldn’t be bound by the amount of data it can process or calculations it can complete. That opens up the possibility for AGI to work at speeds far beyond current AI and even beyond human capabilities.

AGI isn’t yet a reality, but it is a goal that many AI researchers are working towards. It’s also one of the most controversial types of AI.

"AGI could be our savior or our destroyer. It's up to us to decide."
—Bill Gates

Fast Facts

The human brain operates at about one exaflop (10^18 operations per second) while consuming only 20 watts of power, whereas supercomputers consume megawatts for similar processing power.

  • 200-2,000 signals per second: the rate at which a typical human neuron can process information. (Ignitarium)

  • Billions of signals per second: the rate at which a modern computer processor can process information. (Ignitarium)

  • 100 million times faster: the rate at which AI systems can process information compared to the human brain. (ScienceAlert)