
The Internet's Hidden Architecture: How Data Actually Travels Around the World

You press Enter and a webpage loads in 200 milliseconds. In that interval, your data has travelled through a fibre-optic cable under the Atlantic Ocean, been routed through a dozen autonomous systems, and been reassembled from dozens of separate packets. Here is the full story.

Admin · 7 April 2026 · 9 min read

The illusion of instantaneity

The modern internet has been engineered to feel like magic. You type a URL, press Enter, and a richly rendered webpage, assembled from resources stored on servers potentially on another continent, appears in under a second. Nothing in the experience hints at the extraordinary complexity underneath: the physical infrastructure spanning ocean floors, the routing decisions made thousands of times per second, the protocols that reassemble fragmented data with perfect fidelity.

Most people, reasonably, do not think about any of this. But it is worth knowing some of it, because the internet's architecture shapes not just how the web works but what is possible on it, who controls it, and where its vulnerabilities lie.


Packets, not circuits

The first thing to understand is that the internet is a packet-switched network, not a circuit-switched one. This distinction matters enormously.

The telephone network — at least in its classical form — is circuit-switched. When you call someone, a dedicated physical path is established between your phone and theirs for the duration of the call. That path is yours alone; nobody else uses it. This is reliable and simple, but massively inefficient: vast amounts of capacity sit idle whenever there are pauses in conversation.

The internet does something different. When you request a webpage, your computer does not establish a dedicated connection to the server and hold it for the duration. Instead, the data is broken into packets — small chunks, typically up to about 1,500 bytes each. Each packet carries its own header: a source address, a destination address, a sequence number, and a checksum for detecting corruption. Then the packets are released into the network, where they may take entirely different routes to the destination and arrive out of order.

At the other end, the Transmission Control Protocol (TCP) reassembles them in the correct sequence, notices if any packets were lost, and requests retransmission. The webpage loads as though it were delivered in one piece. It was not.
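The split-shuffle-reassemble cycle can be sketched in a few lines. This is an illustration of the principle, not of real TCP: the sequence numbers, 1,500-byte chunks, and dictionary "headers" here are simplifications.

```python
import random

MTU = 1500  # typical maximum payload per packet, in bytes (simplified)

def packetise(data: bytes, mtu: int = MTU) -> list[dict]:
    """Split a byte stream into packets, each tagged with a sequence number."""
    return [
        {"seq": i, "payload": data[i * mtu:(i + 1) * mtu]}
        for i in range((len(data) + mtu - 1) // mtu)
    ]

def reassemble(packets: list[dict]) -> bytes:
    """Restore the original stream by sorting on sequence numbers."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"x" * 5000          # a 5 kB response
packets = packetise(message)   # 4 packets: 1500 + 1500 + 1500 + 500 bytes
random.shuffle(packets)        # the network may deliver them in any order
assert reassemble(packets) == message
```

Real TCP adds acknowledgements, retransmission timers, and flow control on top of this basic idea, but the core trick — independent, numbered chunks reordered at the destination — is exactly this.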

This architecture — proposed by Paul Baran at the RAND Corporation in 1964, partly with nuclear-war resilience in mind — is why the internet is so robust. There is no single path to sever. If a router fails, packets route around it.


The physical layer: it is mostly cables

There is a persistent myth that most internet traffic travels by satellite. It does not. Approximately 99% of international internet traffic travels through undersea fibre-optic cables. There are over 400 submarine cable systems, totalling more than 1.3 million kilometres of cable, crossing every ocean and connecting virtually every inhabited region on Earth.

These cables are engineering marvels. A typical modern submarine cable is about the diameter of a garden hose. At its centre are hair-thin glass fibres, each carrying light pulses that encode data — multiple wavelengths of light per fibre, each wavelength carrying a separate data stream. Modern cables can carry petabits per second across ocean basins.

The cables are laid by specialist ships, buried under the seabed in shallow coastal waters (where anchor strikes and fishing activity pose risks) and left on the ocean floor in the deep sea. At their endpoints are cable landing stations — often nondescript buildings on remote coastlines — where the optical signals are amplified and handed off to terrestrial networks.

Who owns them? Increasingly, technology companies. Google, Meta, Microsoft, and Amazon have either directly funded or wholly own a significant fraction of new submarine cable capacity, because they require so much of it and because owning the cable is cheaper over time than leasing capacity. This has shifted control of physical internet infrastructure from telecoms incumbents toward a small group of technology giants, with significant geopolitical implications.

The strategic chokepoints

Submarine cables congregate at geographic chokepoints — the Suez Canal and Red Sea corridor, the Strait of Malacca, the cable landings between Egypt and Europe — creating potential vulnerability. In 2022, an underwater volcanic eruption severed Tonga's sole international cable, cutting the island nation off from most internet connectivity for weeks. Multiple cable cuts in the Red Sea in 2024, attributed to ship anchors in an active conflict zone, disrupted significant traffic between Europe and Asia.


What happens in 200 milliseconds

When you type a URL and press Enter, a remarkable sequence of events unfolds:

DNS lookup (10–50ms): Your browser knows the domain name (say, algea.in) but not the IP address it resolves to. It queries the Domain Name System — a globally distributed database that translates domain names to IP addresses. Your request travels to a DNS resolver (usually run by your ISP or a service like Google's 8.8.8.8), which either has the answer cached or queries a hierarchy of DNS servers until it gets one.
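You can trigger this lookup yourself with a few lines of Python. The sketch below asks the operating system's configured resolver for the addresses behind a hostname; "localhost" resolves locally, so it runs without network access, but swapping in any public domain performs a real DNS query.

```python
import socket

def resolve(hostname: str, port: int = 443) -> list[str]:
    """Ask the system's DNS resolver for the IP addresses behind a hostname."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr); the address
    # string is the first element of sockaddr for both IPv4 and IPv6.
    return sorted({sockaddr[0] for *_, sockaddr in infos})

print(resolve("localhost"))  # typically ['127.0.0.1', '::1']
```

Note that this shows only the final answer: the resolver's walk through the DNS hierarchy (root servers, TLD servers, authoritative servers) and its caching are hidden behind the single `getaddrinfo` call.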

TCP handshake (10–50ms): Your browser and the destination server exchange three messages to establish a connection: a SYN packet from your browser, a SYN-ACK from the server, and an ACK back from you. This three-way handshake takes at least one full round-trip time — which is why latency matters so much. A server on the other side of the planet might add 200ms of round-trip latency simply because of the speed of light.
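That 200ms figure is easy to check with back-of-the-envelope arithmetic. Light in optical fibre travels at roughly two-thirds of its vacuum speed, about 200,000 km/s; the calculation below gives the physical floor for a round trip, ignoring routing, queuing, and processing delays.

```python
# Latency floor imposed by physics: light in fibre moves at ~2/3 c.
C_FIBRE_KM_PER_S = 200_000

def min_rtt_ms(path_km: float) -> float:
    """Minimum round-trip time over a fibre path of the given length."""
    return 2 * path_km / C_FIBRE_KM_PER_S * 1000

# An antipodal path is ~20,000 km (half Earth's circumference).
print(f"{min_rtt_ms(20_000):.0f} ms")  # 200 ms
```

Real paths are longer than great-circle distance and add per-hop delays, so observed round trips are higher still — which is exactly why every round trip saved in a handshake matters.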

TLS negotiation (20–40ms): For HTTPS connections (which now means virtually every connection), an additional handshake establishes encryption — exchanging public keys, agreeing on cipher suites, and generating session keys. Modern TLS 1.3 reduced this from two round trips to one, cutting meaningful latency.
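The TCP and TLS handshakes stack directly in code: first a TCP connection, then a TLS session wrapped around it. A minimal sketch using Python's standard `ssl` module — the probe call is commented out because it needs network access, and the hostname is purely illustrative:

```python
import socket
import ssl

# Sensible defaults: certificate verification and hostname checks enabled.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

def tls_probe(host: str, port: int = 443) -> tuple[str, str]:
    """Connect, complete the TLS handshake, and report what was negotiated."""
    with socket.create_connection((host, port), timeout=5) as tcp:  # TCP 3-way handshake
        with ctx.wrap_socket(tcp, server_hostname=host) as tls:     # TLS handshake
            return tls.version(), tls.cipher()[0]

# version, cipher = tls_probe("example.com")  # requires network access
```

Everything the article describes — cipher-suite agreement, key exchange, session-key generation — happens inside that single `wrap_socket` call.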

HTTP request and response (variable): Your browser sends a GET request for the page. The server processes it and begins sending the response. For a complex page, this involves dozens of subsequent requests — for images, fonts, stylesheets, JavaScript files — many of which can be parallelised.
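The GET request itself is plain text. This sketch builds the bytes of a minimal HTTP/1.1 request exactly as they appear on the wire; the hostname and path are placeholders.

```python
def build_get(host: str, path: str = "/") -> bytes:
    """Construct a minimal HTTP/1.1 GET request as raw wire bytes."""
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",        # the Host header is mandatory in HTTP/1.1
        "Accept: text/html",
        "Connection: close",
        "",                     # blank line terminates the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

print(build_get("example.com").decode())
```

Sending these bytes down the TLS socket from the previous step and reading the reply is all a browser fundamentally does — dozens of times per page, once for each image, font, stylesheet, and script.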

CDN magic: Most major websites do not serve you content from a single server in one city. They use Content Delivery Networks (CDNs) — companies like Cloudflare, Akamai, and Fastly that maintain thousands of servers in cities worldwide. When you request a webpage from a CDN-served site, you are directed to the nearest server, dramatically reducing latency. Netflix, for instance, has specialised CDN appliances installed directly inside ISP networks; your video data might never leave your city's infrastructure.


BGP: the protocol that runs the internet — and occasionally breaks it

The internet is not one network. It is roughly 80,000 autonomous systems (ASes) — individual networks operated by ISPs, universities, corporations, and governments — each with its own IP address ranges, interconnected in a vast mesh.

How does a packet find its way from your home network to a server in Singapore? Through the Border Gateway Protocol (BGP), which is how autonomous systems announce their address ranges to each other and negotiate paths.

BGP is fundamentally based on trust. When a network announces "I can reach IP addresses X, Y, and Z", other networks believe it and update their routing tables. This trust is the protocol's strength — it allows the internet to adapt dynamically to failures — and its profound weakness.
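One consequence of how routers select among announcements is central to what follows: given several routes that cover a destination, routers prefer the most specific — longest — prefix. A toy routing table makes the rule concrete (the prefixes are modelled on the 2008 YouTube incident described below, but the table itself is invented for illustration):

```python
import ipaddress

# Toy routing table: prefix -> who announced it.
routes = {
    ipaddress.ip_network("208.65.152.0/22"): "AS36561 (YouTube)",
    ipaddress.ip_network("208.65.153.0/24"): "AS17557 (more-specific leak)",
}

def best_route(dest: str) -> str:
    """Longest-prefix match: the most specific covering route wins."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda n: n.prefixlen)]

print(best_route("208.65.153.10"))  # the leaked /24 beats the legitimate /22
```

The /24 covers only 256 addresses while the /22 covers 1,024, so for any destination inside the /24 the narrower announcement wins — regardless of who made it. That is the mechanism a misconfiguration can weaponise.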

The 2008 Pakistan Telecom incident

On 24 February 2008, Pakistan Telecom was ordered by the Pakistani government to block YouTube. Its engineers attempted to do so by announcing a more specific route to YouTube's IP addresses within Pakistan — a standard technique. Through a misconfiguration, this announcement leaked into Pakistan Telecom's upstream provider, PCCW in Hong Kong, and from there propagated globally. Within minutes, YouTube traffic from around the world was being routed to Pakistan Telecom, which had no route to YouTube and was simply dropping the packets. YouTube was inaccessible worldwide for approximately two hours.

Nobody hacked anything. No malice was required. A routing table mistake, propagating across a trust-based global protocol, caused a worldwide outage of one of the world's largest websites.

BGP hijacks — malicious versions of the same mechanism — have been used to intercept traffic, spy on communications, and disrupt services. Patches exist (RPKI, or Resource Public Key Infrastructure, cryptographically validates route announcements), but adoption is incomplete and the fundamental architecture remains fragile.


Data centres: where the internet lives

The servers that store and serve internet content live in data centres — large, climate-controlled buildings filled with racks of servers, networking equipment, and the power and cooling infrastructure to keep them running.

The largest data centres consume as much electricity as a small city. Cooling is one of the biggest costs — all that computing generates heat, which must be removed. This is why data centres are disproportionately located near rivers (for cooling water), hydroelectric power (for cheap, clean electricity), and in cold climates (for free cooling). Iceland, with abundant geothermal power and naturally cold air, has attracted significant data centre investment. Meta's data centre in Luleå, Sweden, uses winter air for cooling, reducing energy use significantly.

The concentration of internet infrastructure in a small number of data centres — primarily run by AWS, Google, Microsoft, and a handful of others — creates genuine systemic risk. When AWS's us-east-1 region experiences an outage, it takes down a remarkable fraction of the visible internet with it, because so many services are built on top of it.


What Starlink changes (and what it does not)

Starlink and similar low-Earth-orbit satellite constellations do not challenge undersea cables for bulk intercontinental data transport — cables will remain faster and cheaper for high-volume traffic for the foreseeable future. What Starlink does change is access: providing connectivity to rural areas, remote locations, and regions where terrestrial infrastructure is sparse or controlled by authoritarian governments.

The latency advantage of low-Earth orbit (roughly 550km altitude, versus 36,000km for traditional geostationary satellites) makes Starlink genuinely useful for real-time applications. It is not a wholesale replacement for fibre. It is a meaningful extension of connectivity to the roughly three billion people who remain underserved.
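The orbit-altitude difference translates directly into latency, and the arithmetic is simple. A relayed packet makes four vertical legs — up and down for the request, up and down for the reply — at the speed of light in vacuum. This gives the physical floor before any processing, queuing, or ground routing:

```python
C_KM_PER_S = 300_000  # speed of light in vacuum, km/s

def relay_rtt_ms(altitude_km: float) -> float:
    """Minimum RTT via a satellite relay directly overhead: four vertical legs."""
    return 4 * altitude_km / C_KM_PER_S * 1000

print(f"LEO (550 km):    {relay_rtt_ms(550):.1f} ms")     # ~7 ms
print(f"GEO (36,000 km): {relay_rtt_ms(36_000):.0f} ms")  # 480 ms
```

A roughly half-second floor is why geostationary internet was never viable for video calls or gaming, and a single-digit-millisecond floor is why low-Earth orbit is.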


The bottom line

The internet is a global packet-switched network running largely over undersea fibre-optic cables, organised into tens of thousands of interconnected autonomous systems, routed by a trust-based protocol called BGP that is powerful and occasionally catastrophically fragile. When you load a webpage, your data is broken into packets, routed through this mesh, served from a nearby CDN node, reassembled, and rendered — all in under a second. The infrastructure underpinning this is simultaneously an extraordinary engineering achievement and a set of critical dependencies that most of the modern economy cannot function without. That most people never think about it is both a testament to how well it works and a reason to understand it better.

Admin is a contributing writer at Algea.
