Seminar 6


Artificial Intelligence and Machine Learning

What is Artificial Intelligence?

  • The dream: building machines that can think, learn, and act.
  • AI: making computers do things that, when done by humans, require intelligence.
  • Two traditions:
    • Symbolic AI (logic, reasoning, rules)
    • Connectionist AI (neural networks, learning from data)
AI Conceptual Graphic

Defining “Intelligence” in AI

  • Human intelligence includes:
    • Perception (seeing, hearing, sensing).
    • Reasoning (logic, planning, problem-solving).
    • Learning (improving from experience).
    • Action (making decisions in the world).
  • AI systems often focus on narrow slices of intelligence.

Symbolic vs. Connectionist AI

  • Symbolic AI (1950s–1980s):
    • Uses rules, logic, and expert knowledge.
    • Strength: clear reasoning and explanations.
    • Weakness: brittle, hard to adapt.
  • Connectionist AI (1980s–today):
    • Uses neural networks and statistics.
    • Strength: adapts well to messy data.
    • Weakness: hard to explain decisions.

Early Artificial Intelligence (Pre-1950s)

  • Philosophical roots: Descartes, Leibniz, Hobbes → thinking as computation.
  • 1800s: Babbage & Lovelace imagine mechanical computation.
  • 1940s: Alan Turing formalizes the concept of a universal machine.
  • Turing Test (1950): “Can machines think?”
Alan Turing

Turing and Computability

  • Alan Turing (1912–1954):
    • Formalized computability with the Turing Machine.
    • Proposed the Imitation Game (Turing Test).
  • His question reframed AI research: not “can machines think?” but “can they act as if they think?”
  • Foundation of modern computer science + AI.

The Turing Test

What is the Turing Test?

  • Proposed by Alan Turing in 1950 (“Computing Machinery and Intelligence”)
  • Seeks to answer: “Can machines think?”
  • Uses an imitation game instead of defining “thinking” directly

How It Works

  • Participants: A human judge, a human, and a machine
  • Interaction: Text-only conversation (no voice/appearance)
  • Goal: If the judge cannot tell which is the machine, the machine “passes”

Significance

  • Foundational idea in artificial intelligence
  • Shifts focus to observable behavior
  • Sparked debates on intelligence, language, and consciousness

Criticism

  • Tests mimicry, not real understanding
  • Chinese Room Argument (John Searle): passing is not the same as comprehension
    • A thought experiment directed against the Turing Test
    • Imagine a person in a room with a rulebook for manipulating Chinese symbols
    • They receive Chinese input, apply the rules, and return correct Chinese output
    • To outsiders, it looks like they understand Chinese
    • But inside, the person doesn’t understand the language, just symbol manipulation
    • The argument suggests that passing the Turing Test is not equivalent to real understanding or consciousness
  • Modern AI (like ChatGPT) challenges the test’s adequacy
  • Play the Turing Test online against a human or an LLM

The Birth of AI (1950s–1970s)

  • 1956: Dartmouth Conference – the official birth of AI.
  • Optimism: “Machines will rival human intelligence within a generation.”
  • Achievements:
    • Early logic programs (Newell & Simon’s Logic Theorist).
    • Game-playing AI (chess, checkers).
    • Natural language attempts (ELIZA, SHRDLU).
Dartmouth AI Proposal (1955)

Early Success Stories

  • Logic Theorist (1956): proved 38 of 52 theorems from Principia Mathematica.
  • General Problem Solver (1959): aimed for universal reasoning.
  • Game-playing programs: checkers programs learned strategies better than some humans.
  • Set the tone: AI could match or exceed humans in narrow tasks.

Limits of Early AI

  • Overestimated progress:
    • Language understanding far harder than expected.
    • Commonsense reasoning remained elusive.
  • Hardware limited:
    • Very small memory (kilobytes!).
    • Slow processors.
  • Early optimism gave way to frustration.

Image from DALL·E 3

The First AI Winter (1970s)

  • Bold promises → disappointing results.
  • Computers were too slow, memory too small.
  • Governments cut funding.
  • Overhype + underdelivery = AI winter.

Why the First Winter Happened

  • Natural language understanding proved intractable.
  • Commonsense reasoning: trivial for humans, impossible for machines.
  • Cost of computing was high; funding dwindled.
  • Media hype made failures more visible.

Consequences of the Winter

  • Research funding shifted to other areas.
  • AI became an “academic backwater.”
  • Yet… important foundations were laid:
    • Search algorithms.
    • Knowledge representation.
    • Early neural network concepts.

Image from DALL·E 3

Thawing from the First Winter (1980s)

  • Japan’s “Fifth Generation Computer Project”
    • Timeline: 1982–1992
    • Goal: leap beyond conventional computing → AI-driven, knowledge-based systems
    • Massively parallel architectures (Parallel Inference Machines)
    • Logic programming (Prolog) as foundation
    • Natural language processing, expert systems, theorem proving
    • Advanced parallel computing research
    • New logic programming tools
    • Trained a generation of AI researchers
    • Fell short of ambitious goals
    • Outpaced by conventional computing advances
    • Limited commercial impact, but lasting research influence
  • Renewed funding & optimism.
  • Expert Systems emerge: rule-based decision-making (e.g., medical diagnosis).

Backward Chaining (Expert Systems)

Expert Systems Explained

  • Mimic human experts by encoding if-then rules (see the sketch below).
  • Example: MYCIN (1970s) for diagnosing blood infections.
  • Advantages: explainable reasoning.
  • Limitations: hard to maintain, rules explode in number.
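
To make the if-then rules and the backward-chaining diagram above concrete, here is a minimal sketch in Python. The rules and facts are invented for illustration; a real system such as MYCIN encoded hundreds of far more detailed rules.

```python
# Minimal backward-chaining sketch over hypothetical if-then rules
# (illustrative only; not MYCIN's actual rule base).

RULES = [
    # (conclusion, conditions that must all hold)
    ("bacterial_infection", ["fever", "high_white_cell_count"]),
    ("prescribe_antibiotic", ["bacterial_infection", "no_allergy"]),
]
FACTS = {"fever", "high_white_cell_count", "no_allergy"}

def prove(goal, facts, rules):
    """True if `goal` is a known fact or can be derived by recursively
    proving every condition of some rule that concludes it."""
    if goal in facts:
        return True
    return any(
        all(prove(c, facts, rules) for c in conditions)
        for conclusion, conditions in rules
        if conclusion == goal
    )

print(prove("prescribe_antibiotic", FACTS, RULES))   # True
```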

The Fifth Generation Project

  • Japan (1982–1992): aimed for computers with human-like reasoning.
  • Focus on logic programming (Prolog).
  • Sparked global competition (U.S., Europe).
  • Ultimately fell short, but drove research and funding.

Image from DALL·E 3

The Second AI Winter (1990s)

  • Expert systems proved expensive & brittle.
  • Failed to scale → loss of confidence.
  • Another crash in funding and interest.

Why Expert Systems Failed

  • Knowledge engineering bottleneck:
    • Extracting rules from experts was slow.
    • Systems could not adapt to new situations.
  • Maintenance nightmare: rules conflicted.
  • Costs outweighed benefits.

Shift in Focus During the 1990s

  • Rise of statistical approaches: probability, statistics, data-driven learning.
  • AI blended with computer science fields:
    • Databases.
    • Information retrieval.
    • Robotics.
  • AI rebranded as “machine learning” in many contexts.

Thawing Again (Late 1990s–2000s)

  • New hope: statistical methods & machine learning.
  • Internet explosion = big data.
  • Cheaper, faster hardware.
  • IBM’s Deep Blue defeats Kasparov (1997).
IBM Deep Blue

Deep Blue’s Victory

  • IBM Deep Blue vs. Garry Kasparov (1997).
  • Specialized hardware + game-tree search + handcrafted evaluation heuristics (see the sketch below).
  • Landmark: machine beats world chess champion.
  • Symbolized AI as practical and competitive.
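
To illustrate the “search + heuristics” idea, here is a toy sketch of minimax with alpha-beta pruning on a made-up game tree. This is only the core concept, not IBM’s engine, which ran a massively parallel search on custom chess hardware.

```python
# Toy minimax with alpha-beta pruning. Internal nodes are lists of children;
# leaves are heuristic scores from the maximizing player's point of view.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):            # leaf: apply the evaluation heuristic
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:                 # opponent will never allow this branch
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

print(alphabeta([[3, 5], [2, 9], [0, 1]], maximizing=True))   # 3
```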

Big Data Revolution

  • Internet users generated huge amounts of data.
  • AI shifted from hand-coded rules to pattern-finding in data.
  • Statistical learning → recommendation systems, spam detection (see the sketch below).
  • More data → better performance.
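
As one hedged example of this shift from hand-coded rules to learned patterns, here is a minimal bag-of-words spam filter built with scikit-learn’s naive Bayes classifier. The four “emails” are made up; real filters train on millions of labeled messages.

```python
# Minimal sketch of a statistical spam filter: word-count features + naive Bayes,
# trained on a tiny invented corpus (not real spam data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "cheap meds free offer",
          "meeting agenda attached", "lunch tomorrow with the team"]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)      # bag-of-words counts
model = MultinomialNB().fit(X, labels)

test = vectorizer.transform(["free prize offer"])
print(model.predict(test))                # ['spam']
```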

AI in the Early 2000s

  • Search engines revolutionize knowledge access.
  • Speech recognition improves.
  • AI used in spam filters, recommendation systems, and everyday apps.
Early Google Logo (1998)

Search and Recommendation

  • Google’s PageRank transformed search (late 1990s); see the sketch below.
  • Amazon pioneered personalized recommendations.
  • AI became embedded in daily life, often invisibly.
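
A minimal sketch of the PageRank idea, computed by power iteration on a tiny hypothetical four-page link graph. The production system handles dangling pages, personalization, and web-scale graphs; this shows only the core idea.

```python
# Minimal PageRank sketch via power iteration on an invented link graph.
import numpy as np

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}   # who links to whom
pages = sorted(links)
n = len(pages)
idx = {p: i for i, p in enumerate(pages)}

# Column-stochastic transition matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[idx[dst], idx[src]] = 1.0 / len(outs)

d = 0.85                                  # damping factor
rank = np.full(n, 1.0 / n)
for _ in range(50):                       # iterate until (roughly) converged
    rank = (1 - d) / n + d * M @ rank

print({p: round(float(r), 3) for p, r in zip(pages, rank)})
```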

Speech and Language in the 2000s

  • Hidden Markov Models (HMMs) drove progress (see the decoding sketch below).
  • Systems like Dragon NaturallySpeaking became usable.
  • Foundation for voice assistants in the 2010s.
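
A minimal sketch of HMM decoding with the Viterbi algorithm. The two states, observations, and probabilities below are invented for illustration; recognizers of that era used much larger HMMs over acoustic features, but the core computation is the same.

```python
# Viterbi decoding of the most likely hidden state sequence for a tiny toy HMM.
import numpy as np

states = ["silence", "speech"]
obs = [0, 1, 1, 0]                        # quantized acoustic observations
start = np.array([0.6, 0.4])              # P(initial state)
trans = np.array([[0.7, 0.3],             # P(next state | current state)
                  [0.4, 0.6]])
emit = np.array([[0.8, 0.2],              # P(observation | state)
                 [0.3, 0.7]])

# viterbi[t, s] = probability of the best path ending in state s at time t
viterbi = np.zeros((len(obs), len(states)))
back = np.zeros((len(obs), len(states)), dtype=int)
viterbi[0] = start * emit[:, obs[0]]
for t in range(1, len(obs)):
    for s in range(len(states)):
        scores = viterbi[t - 1] * trans[:, s]
        back[t, s] = scores.argmax()
        viterbi[t, s] = scores.max() * emit[s, obs[t]]

# Follow the backpointers from the best final state.
path = [int(viterbi[-1].argmax())]
for t in range(len(obs) - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
print([states[s] for s in reversed(path)])
```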

Deep Learning (2010s)

  • Inspired by the brain → neural networks with many layers.
  • Requires massive data + GPUs.
  • Achievements:
    • ImageNet breakthrough (2012).
    • Near-human speech recognition.
Neural Network

The ImageNet Moment

  • 2012: AlexNet, from Geoffrey Hinton’s group (Krizhevsky, Sutskever & Hinton).
  • Deep CNNs cut image classification errors nearly in half.
  • Catalyzed explosion of deep learning research.

The Digit Predictor
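
This slide presumably pointed to a live digit-recognition demo. As a stand-in, here is a minimal sketch that trains a small neural network on scikit-learn’s built-in 8×8 digit images (the actual demo may have used a different model or dataset).

```python
# Minimal "digit predictor" sketch: a small neural network on scikit-learn's
# built-in handwritten digits (assumes scikit-learn is installed).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)            # 1797 images, 64 pixels each
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 3))
```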


Deep Learning Applications

  • Vision: face recognition, medical imaging.
  • Language: machine translation, speech recognition.
  • Games: AlphaGo (2016) beat Go champion Lee Sedol.
  • Became state-of-the-art in many fields.

Self-Driving Cars

  • Early DARPA challenges (2004–2005).
  • Google’s self-driving car (Waymo).
  • Symbol of AI entering the physical world.
Waymo Self-Driving Car

DARPA Grand Challenges

  • 2004: most teams failed; none finished.
  • 2005: Stanford’s “Stanley” won by completing the desert course.
  • Sparked commercial and academic self-driving research.

From DARPA to Waymo

  • Google project launched in 2009.
  • Progressed from retrofitted Priuses to dedicated self-driving cars.
  • Challenges: safety, regulation, ethics.
  • Still an active frontier today.

Transformers and GPT

  • 2017: Transformers revolutionize AI.
  • GPT (Generative Pretrained Transformer): AI that can write, converse, create.
  • A leap in natural language understanding.
Attention (Transformers)

Why Transformers Matter

  • Key innovation: the attention mechanism (see the sketch below).
  • Handles sequences efficiently.
  • Enables training on massive text corpora.
  • Foundation of modern NLP breakthroughs.
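
A minimal sketch of scaled dot-product attention, the operation at the heart of the Transformer (single head, no learned projection matrices, no masking):

```python
# Scaled dot-product attention on random toy embeddings.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each output row is a weighted average of the rows of V, with weights
    given by how well the corresponding query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # similarity of every query to every key
    weights = softmax(scores, axis=-1)     # each row of weights sums to 1
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                    # 4 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))
print(attention(X, X, X).shape)            # (4, 8): self-attention output
```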

GPT and Generative AI

  • GPT-2 (2019), GPT-3 (2020), GPT-4 (2023).
  • Can generate essays, code, stories, poetry.
  • Raises questions about creativity and authorship.

LLMs in the 2020s

  • Chatbots like ChatGPT become mainstream.
  • AI used in art, code, medicine, education.
  • Raises questions: ethics, bias, jobs, creativity.
ChatGPT Logo

Ethical Challenges of LLMs

  • Biases in training data → biased outputs.
  • Risks of misinformation and “hallucination.”
  • Intellectual property questions: who owns AI-generated text?
  • Need for transparency and accountability.

AI in Society Today

  • Ubiquitous in:
    • Search and recommendation.
    • Healthcare and drug discovery.
    • Creative tools (art, music, writing).
  • Balancing opportunity and risk is key.

The Future of AI

  • Superintelligence? Hype or possibility?
  • Collaboration between humans and AI.
  • Key questions:
    • How do we align AI with human values?
    • How do we regulate and govern AI?
    • What role will YOU play in shaping AI’s future?
Future AI

Possible Futures

  • Optimistic: AI accelerates science, cures diseases, solves climate change.
  • Pessimistic: job loss, surveillance, misuse in war.
  • Balanced: humans + AI collaborate, but with regulation.

Your Role in the Future

  • Every user, policymaker, and engineer contributes to AI’s trajectory.
  • Think critically:
    • Where should AI be trusted?
    • Where should humans remain in control?
  • The future is not predetermined; it’s shaped by the choices we make now.

Discussion

  • What excites you most about AI?
  • What concerns you the most?
  • Where do you see yourself in this AI-powered world?