The Book in 3 Sentences

  • Think like a scientist and an engineer.
  • Learning to learn is more valuable than memorizing specific knowledge.
  • Technology evolves too quickly for any fixed body of knowledge to stay sufficient.

Top Quotes

Summary & Reflection

Chapter 1 – Orientation

A clear distinction is drawn between science and engineering:

  • Science seeks to understand nature.
  • Engineering seeks to use that knowledge to solve real-world problems.

Both require creativity, persistence, and disciplined thinking.

Takeaway: Respect both, but be clear about which mindset you are using.

Beyond technical expertise, great scientists and engineers think about how they work:

  • How do I pick problems worth working on?
  • How do I stay motivated?
  • How do I avoid being trapped by conventional thinking?

This reflective knowledge is what the author calls “learning to learn.”

Takeaway: Choose problems that matter.

  • Don’t waste time on trivial work that won’t make an impact.
  • Orient yourself toward problems that are important and lasting.

Chapter 2 – The Foundation of the Digital (Discrete) Revolution

Continuous vs. Discrete:

  • For centuries, science and engineering were dominated by continuous mathematics (calculus, differential equations).
  • The digital revolution is built on discrete mathematics — integers, logic, finite states.

Why discrete matters:

  • Discrete systems allow exactness (a 0 is a 0, a 1 is a 1).
  • They are less sensitive to small errors compared to continuous analog systems.
  • This robustness made large-scale computing possible.
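That robustness can be illustrated with a small simulation (my own sketch, not from the book): an analog chain lets noise accumulate stage after stage, while a digital chain regenerates exact 0s and 1s at every stage, so the signal never drifts.

```python
import random

random.seed(0)

def noisy(signal, sigma=0.1):
    """Add small random noise, as an imperfect wire or amplifier would."""
    return [s + random.gauss(0, sigma) for s in signal]

def regenerate(signal):
    """Digital repeater: snap each value back to an exact 0 or 1."""
    return [1.0 if s >= 0.5 else 0.0 for s in signal]

bits = [0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0]

# Analog path: noise accumulates over 50 stages.
analog = bits
for _ in range(50):
    analog = noisy(analog)

# Digital path: the same 50 noisy stages, but regenerated after each one.
digital = bits
for _ in range(50):
    digital = regenerate(noisy(digital))

print("original:", bits)
print("analog  :", [round(s, 2) for s in analog])  # values drifted away from 0/1
print("digital :", digital)                        # snapped back to exact bits
```

With per-stage noise far smaller than the gap between 0 and 1, thresholding at each stage keeps errors from ever compounding; this is the property that made long chains of digital components practical.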

The marriage of mathematics and physical electronics (switches, relays, transistors) created the digital world.

Thinking digitally means thinking in terms of bits, states, and exact logic rather than smooth analog variations.
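That “exact logic” can be made concrete with standard Boolean identities (my own illustration, not from the book): a single NAND gate, which a physical switch or transistor pair can implement, suffices to build every other logic function.

```python
# Standard identities: every Boolean function can be composed from NAND alone.

def NAND(a: bool, b: bool) -> bool:
    return not (a and b)

def NOT(a: bool) -> bool:
    return NAND(a, a)

def AND(a: bool, b: bool) -> bool:
    return NOT(NAND(a, b))

def OR(a: bool, b: bool) -> bool:
    return NAND(NOT(a), NOT(b))

def XOR(a: bool, b: bool) -> bool:
    return AND(OR(a, b), NAND(a, b))

# With only two discrete states, checking all inputs is complete verification --
# something impossible for a continuous analog system.
for a in (False, True):
    for b in (False, True):
        assert XOR(a, b) == (a != b)
```

The exhaustive check at the end is itself a digital-thinking point: finite states mean correctness can be verified case by case.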

The shift to digital created whole new fields: computer science, information theory, communication technology. It fundamentally changed how engineers design systems and how scientists analyze data.

Chapter 3 – History of Computers – Hardware

Understand that hardware drives possibility

  • Early mechanical devices (the abacus, Babbage’s difference engine, mechanical tabulators) showed that automatic computation was possible even before electronics.
  • Vacuum tubes (1940s) made electronic computers possible, replacing mechanical switches and making computation much faster.
  • Transistors (1950s) replaced vacuum tubes, making computers smaller, faster, and cheaper; commercial computers were born.
  • Integrated circuits (1960s–70s) combined many transistors on one chip, enabling mainframes and microcomputers.
  • By the 1970s, an entire CPU could fit on a single chip, leading to the boom of personal computers and the modern computing explosion.

Moore’s law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. It explains the exponential growth of computing power: every few years, more power at lower cost.
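As a quick arithmetic illustration (my own, using the Intel 4004’s commonly cited 2,300 transistors in 1971 as a starting point), doubling every two years compounds dramatically:

```python
# Moore's law as idealized compound growth: count doubles every ~2 years.

def transistors(start_count: int, start_year: int, year: int,
                doubling_period: float = 2.0) -> float:
    """Project transistor count under an idealized Moore's-law doubling."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Projected from the Intel 4004's ~2,300 transistors (1971):
for year in (1971, 1981, 1991, 2001):
    print(year, round(transistors(2300, 1971, year)))
```

Ten years is five doublings (a 32x increase), and thirty years is fifteen doublings (over 32,000x), which is why the growth felt like a revolution rather than steady progress.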

The deeper point of this hardware evolution is that:

  • Each stage of hardware evolution didn’t just make machines faster — it changed what was possible in science, engineering, and society.
  • The progress wasn’t just technical but conceptual: people began to think differently because of what computers could do. Therefore, each hardware advance created new forms of computation and new fields of study.

Concepts matter more than components

Technologies change, but the principles — logic, representation, control, memory — remain essential.

Expect revolutions, not just evolution

Big shifts (mechanical → electronic → integrated → digital) reshape not just machines but human thought.

Learn from history

Knowing how hardware evolved helps you anticipate where computing might go next (quantum, neuromorphic, etc.).


Reflected on: 2025-10-12