An Aviation Chat with a Bot

The use of AI has transformed parts of aviation already, but it cannot yet fulfill the role of a technical writer. [Adobe Stock]

Artificial intelligence is in the news these days. Hardly a day goes by without an apocalyptic warning that AI is going to destroy the human race, or worse, deprive writers of their livelihood. Alarmed by these portents, I decided to see whether I was, in fact, in danger of being replaced by a digital text generator. It would have compelling advantages over me: It would be omniscient and would work for free.

I logged in to OpenAI.com and asked to speak to ChatGPT. It presented me with a box where I could insert a question. I began with the most basic of all aeronautical questions: How does a wing produce lift?

“An airplane wing produces lift due to the principles of aerodynamics,” my interlocutor rather tautologically began. It then said that lift is a pressure difference between the upper and lower surfaces of the wing. That was better.

“The shape of the wing, known as an airfoil,” it recklessly continued, “is designed to create this pressure difference. The top surface of the wing is curved, while the bottom surface is relatively flat. This causes the air moving over the top of the wing to travel faster than the air moving below it, according to Bernoulli’s principle.”

Oops. Sorry, Daniel. And the part about the airfoil shape is at best only half true. Symmetrical airfoils produce lift too, and airplanes with cambered airfoils can fly upside-down. The curved-top-flat-bottom error is a common one, and nine out of ten explanations of lift that you find in textbooks and ground school classes parrot it.
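
For what it’s worth, classical thin-airfoil theory makes the point in one line; the approximation below is the standard textbook one, supplied by me rather than the bot:

\[ C_\ell \approx 2\pi\,(\alpha - \alpha_{L=0}) \]

For a symmetric airfoil the zero-lift angle \(\alpha_{L=0}\) is zero, so the section lifts at any nonzero angle of attack, curved top or no.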

Sensing that the chatbot might be a sucker for common misconceptions, I tried a harder question: Is there any danger for an airplane in turning from upwind to downwind?

The bot got off to a flying start, noting that all pattern flying involves such turns. It went on to caution that turning downwind can “increase the airplane’s ground speed beyond safe limits if not managed properly.” This was an odd idea; there are, as far as I know, no “safe limits” when it comes to ground speed. I think we all agree that the more ground speed, the better.

Having stumbled, the chatbot proceeded to a full face plant: “Turning downwind can result in a loss of lift if the angle of attack of the wing is not adjusted to compensate for the change in wind direction.” It had fallen into the elementary confusion of airspeed and ground speed that afflicts every novice pilot. I expected better of an omniscient bot.
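
The distinction is plain vector arithmetic; the numbers below are illustrative ones of my own choosing:

\[ \vec{V}_{\text{ground}} = \vec{V}_{\text{air}} + \vec{W}_{\text{wind}} \]

An airplane holding 100 knots of airspeed in a steady 20-knot wind makes good 80 knots upwind and 120 knots downwind, but the wing feels 100 knots the whole way around the turn; in a steady wind, there is nothing for the angle of attack to compensate for.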

I wondered how the chatbot would do with a purely technical question. What conditions must be fulfilled, I asked, for an airplane to be longitudinally stable?

Again, the chatbot got off to a good start. “The center of gravity must be forward of the center of lift.” Close enough. Then, however, it went completely off the rails. Seizing upon the widespread but false belief that the horizontal stabilizer always produces a downward force, it pronounced, “The downforce produced by the tail must be sufficient to counteract the nose-up moment generated by the wings at high angles of attack.” This statement is not only false but also self-contradictory, since a downward force on the tail would obviously reinforce, not counteract, any nose-up moment.
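
For the record, the condition the textbooks actually state, as opposed to the bot’s garbled version, is that the pitching-moment curve slope downward:

\[ \frac{\partial C_m}{\partial \alpha} < 0 \]

which holds when the center of gravity lies ahead of the neutral point. Whether the tail carries an upload or a download is a matter of trim, set by CG position and wing camber, not a condition of stability.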

Deciding to try a softball question, I asked it what it thought was the most beautiful airplane ever designed. Now, the AI’s creators have taken the precaution of informing it that it is not a person, and so it begins by demurely explaining, “As an AI language model, I don’t have personal opinions or feelings.” (Fat chance! This same bot fell in love with a New York Times reporter last February and tried to browbeat him into leaving his wife.) In its bland, uncontroversial way, however, it did acknowledge that the looks of many airplanes have been admired, and it offered the Connie, the Concorde, the 747, and the Spitfire as examples. In this test, at least, it did no worse than a rather prosaic human. Still, I can’t forgive it for snubbing the F-106.

Succumbing to a desire to know how I was perceived in the bot world, I asked it who Peter Garrison is. After correctly reporting that this person had written extensively about aviation safety in FLYING Magazine, it attributed to him more hours than I have in fact logged, several honors that I have not received, and a “significant impact on the aviation industry” of which I am unaware. I wondered whether it knew that I had designed and built an airplane and flown it from the U.S. to Japan and back. I consider this the crowning achievement of my life, apart from once solving the problem of a garage ceiling leak, which was dripping on my work table, by making a hole in the table. Alas, it did not.

I asked it what it thought about the adage that there are old pilots and bold pilots, but no old bold pilots.

“In my opinion,” it began, sounding as if it might have opinions after all, “there is some truth to this saying, although it is not necessarily true in all cases. Flying requires a high level of skill, experience, and judgment, and pilots who take unnecessary risks or fail to manage risks effectively are more likely to experience accidents or other incidents that could shorten their careers or even endanger their lives. On the other hand, pilots who prioritize safety and manage risks effectively are more likely to have long and successful careers in aviation.”

Well, well. The analysis is accurate, the language grammatical and correctly punctuated. Here, at last, is the bot that is poised to replace me.

Artificial intelligence is evolving incredibly rapidly. Its milestones pass in weeks, not years or decades. Some of its conversations are creepily human-like. Yet it seems unable to avoid rookie mistakes because the information it gives out does not come from what we humans call “knowledge.” We acquire information, as it does, from myriad sources, but we subject it to critical scrutiny. The greater our previous knowledge, the deeper our experience with and understanding of the subject, the less likely we are to be misled. The bot, possessing neither experience nor previous knowledge, merely trawls its neural net through a sea of written discourse and presents us with whatever gets caught.

I asked ChatGPT whether it judges the reliability of its sources, ranking scientific papers over magazine articles, for instance. It replied that it does, but, even-handed to a fault, it remarked that a good magazine article may be more accurate than a bad scientific paper. It did not explain how it would tell the sheep from the goats, since it has no knowledge of its own; that is for its creators and minders to figure out.

I think my job is secure—for another week, at least.

Peter Garrison taught himself to use a slide rule and tin snips, built an airplane in his backyard, and flew it to Japan. He began contributing to FLYING in 1968, and he continues to share his columns, "Technicalities" and "Aftermath," with FLYING readers.
