Tesla’s story in self-driving is a mix of real technical progress, bold marketing, and a moving finish line. On one hand, Full Self-Driving (FSD) has evolved into a system that can handle complex navigation (turns, merges, lane changes, intersections, parking maneuvers) with startling competence much of the time. On the other hand, Tesla itself is explicit that today’s product is not autonomous: it requires active driver supervision and does not make the car self-driving in the legal or technical sense. (Tesla)
Meanwhile, “Robotaxi” is the bigger promise: cars that don’t just help a driver, but replace the driver—turning vehicles into revenue-generating autonomous fleets. That leap is not merely incremental. It’s a jump across technology, regulation, safety validation, business operations, insurance, and public trust. This article explains what Tesla’s FSD really is today, how it works at a high level, what “Robotaxi” requires that FSD doesn’t yet deliver, and why the next phase will be harder than many people expect.
1) What Tesla FSD is today (and what it is not)
Tesla currently sells Full Self-Driving (Supervised). Tesla describes it as a system that can drive you “almost anywhere” under your supervision, and Tesla emphasizes that enabled features require active driver supervision and “do not make the vehicle autonomous.” (Tesla)
Regulators largely categorize this as SAE Level 2 driver assistance, meaning the system can control steering and speed in certain conditions, but the human driver remains responsible and must continuously supervise. NHTSA’s automation-level descriptions make that distinction clear: Level 2 still expects the driver to monitor the environment and be ready to take over immediately. (NHTSA)
This matters because “self-driving” is not one thing—it’s a ladder:
Level 2 (driver assistance): the human supervises everything.
Level 3 (conditional automation): the system drives itself in limited conditions, but a human must stay available to take back control when prompted.
Level 4 (true robotaxi in a defined area): the system drives itself within an Operational Design Domain (ODD)—for example, specific cities, geofenced neighborhoods, certain weather limits—without expecting a human to watch the road.
Level 5 (anywhere, anytime): full autonomy in all conditions.
Tesla’s consumer FSD today is still, by the company’s own characterization and by regulatory framing, on the Level 2 rung. (NHTSA)
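To make the rungs concrete, here is a minimal Python sketch of the responsibility rule that separates Level 2 from Level 4. Everything in it is invented for illustration (the ODD cities, the weather set, the `who_is_responsible` function); it is not any regulator’s or vendor’s actual logic, just the Level 2 vs. Level 4 distinction expressed as code.

```python
from dataclasses import dataclass
from enum import Enum

class SAELevel(Enum):
    L2_DRIVER_ASSIST = 2    # human supervises at all times
    L4_HIGH_AUTOMATION = 4  # system is responsible, but only inside its ODD

@dataclass
class Conditions:
    """Snapshot of where/when the vehicle is operating (illustrative fields)."""
    city: str
    weather: str            # e.g. "clear", "rain", "snow"
    is_geofenced_area: bool

# Hypothetical ODD for a Level 4 service: a few cities, benign weather, geofenced.
ODD_CITIES = {"Phoenix", "San Francisco", "Austin"}
ODD_WEATHER = {"clear", "light_rain"}

def who_is_responsible(level: SAELevel, c: Conditions) -> str:
    if level is SAELevel.L2_DRIVER_ASSIST:
        # Level 2: conditions never remove the human from the loop.
        return "human driver (continuous supervision required)"
    # Level 4: the system drives itself, but only inside its ODD.
    inside_odd = (c.city in ODD_CITIES
                  and c.weather in ODD_WEATHER
                  and c.is_geofenced_area)
    if inside_odd:
        return "the system (no human attention expected)"
    # Outside its ODD, a Level 4 system must reach a safe state on its own
    # rather than assume a human will take over instantly.
    return "the system, via a minimal-risk maneuver (e.g. pull over safely)"

print(who_is_responsible(SAELevel.L2_DRIVER_ASSIST,
                         Conditions("Austin", "clear", True)))
print(who_is_responsible(SAELevel.L4_HIGH_AUTOMATION,
                         Conditions("Austin", "snow", True)))
```

The key asymmetry the sketch captures: at Level 2, no set of conditions ever shifts responsibility off the human; at Level 4, the system owns the drive, and leaving the ODD obligates it to fail safely, not to hand the wheel back.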
2) How Tesla’s approach differs: “vision-first” and fleet learning
Tesla’s technical strategy has been distinctive: heavy reliance on cameras and neural networks, with a philosophy that the best path to scalable autonomy is to solve driving the way humans do—primarily through vision—then scale via software and data.
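As a rough illustration of what “solve driving through vision” means architecturally, here is a toy end-to-end sketch in PyTorch: several camera feeds pass through a shared encoder, get fused into one representation, and regress a short trajectory. The module names, layer sizes, and camera count are assumptions for the example, not Tesla’s actual network.

```python
import torch
import torch.nn as nn

class CameraEncoder(nn.Module):
    """Tiny CNN standing in for a per-camera feature extractor."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling: one vector per camera
            nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, img):  # img: (batch, 3, H, W)
        return self.net(img)

class VisionOnlyPlanner(nn.Module):
    """Fuses features from N cameras and regresses a short trajectory."""
    def __init__(self, n_cameras: int = 8, feat_dim: int = 64, horizon: int = 10):
        super().__init__()
        self.encoder = CameraEncoder(feat_dim)  # weights shared across cameras
        self.head = nn.Sequential(
            nn.Linear(n_cameras * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, horizon * 2),         # (x, y) waypoints
        )
        self.horizon = horizon

    def forward(self, cams):  # cams: (batch, n_cameras, 3, H, W)
        b, n = cams.shape[:2]
        feats = self.encoder(cams.flatten(0, 1))  # encode every view
        feats = feats.view(b, -1)                 # concatenate per vehicle
        return self.head(feats).view(b, self.horizon, 2)

# One forward pass on random "frames" from 8 cameras.
model = VisionOnlyPlanner()
waypoints = model(torch.randn(1, 8, 3, 96, 160))
print(waypoints.shape)  # torch.Size([1, 10, 2])
```

The design point the sketch illustrates is that a camera-only stack has no separate range sensor to lean on: depth, free space, and everything else must be inferred from pixels by the learned model, which is exactly why the approach lives or dies on data and training.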
Over the last several years, Tesla moved further toward “Tesla Vision.” Tesla has published support material describing the transition away from certain non-vision sensors, including the removal of ultrasonic sensors (USS) from vehicles and the shift to camera-based replacements for some features. (Tesla)
(Separately, multiple automotive outlets documented Tesla’s earlier move toward camera-only for certain models/markets by removing radar, as part of the broader “Tesla Vision” shift.) (The Drive)
The upside of this approach is scalability: millions of cars can collect real-world driving data, and Tesla can iterate quickly via over-the-air updates. The downside is that vision-only autonomy has to be extraordinarily robust in the messy corners of reality: glare, heavy rain, occlusions, odd construction layouts, faded markings, emergency scenes, human gestures, and rare-but-critical edge cases.
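One way to picture the fleet-learning upside is trigger-based data collection: each car filters its own driving for moments worth uploading, such as driver interventions or low model confidence, because routine miles teach the model little. The sketch below simulates that filter; the field names, thresholds, and `should_upload` trigger are hypothetical, not Tesla’s actual pipeline.

```python
import random
from dataclasses import dataclass

@dataclass
class Clip:
    """A short logged driving segment (fields are illustrative)."""
    clip_id: int
    model_confidence: float  # planner's own confidence in [0, 1]
    driver_took_over: bool   # human intervened during the clip

def should_upload(clip: Clip, confidence_floor: float = 0.6) -> bool:
    """On-vehicle trigger: keep only clips likely to teach the model something.
    Interventions and low-confidence moments are classic edge-case signals."""
    return clip.driver_took_over or clip.model_confidence < confidence_floor

# Simulate a day of fleet driving: the vast majority of miles are routine,
# so aggressive on-vehicle filtering is what makes fleet-scale collection tractable.
fleet_log = [Clip(i, random.random(), random.random() < 0.01)
             for i in range(100_000)]
to_upload = [c for c in fleet_log if should_upload(c)]
print(f"{len(to_upload)} of {len(fleet_log)} clips flagged for upload")
# Offline (not shown): label flagged clips, retrain, ship the new model
# over-the-air, and repeat. That loop is the "data flywheel."
```

The hard part, as the rest of this section argues, is the tail of that distribution: glare, occlusions, and rare emergency scenes are precisely the clips a confidence-based trigger can miss, because a model that misjudges a scene may also be confidently wrong about it.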