Fly Me to the Moon - Then and Now

By Richard J. Gran, MathWorks

A history of the digital autopilot that landed men on the Moon 30 years ago this summer - and a discussion of how it could be designed and implemented today using MATLAB, Simulink, and Stateflow.

As a young boy I always knew I wanted to become an engineer, but what I didn't know was that shortly after graduating from college, I would become involved in one of the greatest engineering projects ever. On September 12, 1962, President John F. Kennedy declared, "We choose to go to the Moon in this decade," and set this nation on an accelerated path to landing a man on the Moon. That same year, I became a member of the Grumman Guidance and Control group and, as such, participated, on field assignment, in the design of the Lunar Module digital autopilot at the MIT Instrumentation Laboratories from 1963 to 1966.

Let's go back 30 years and describe the process used to program the digital control system that would run on the Guidance and Navigation Computer. Many of the procedures invented during this effort are now part of today's software engineering process. Thanks to The MathWorks, this process can be more efficient. The modern approach uses MATLAB®, Simulink®, Stateflow®, and Real-Time Workshop®.

The MATLAB/Simulink models that were developed by Paul Barnard and Richard Gran for this article are available for you to work with. They can be downloaded from www.mathworks.com/company/newsletters/news_notes/sum99/lunar_module/.

The Lunar Module Digital Autopilot Design - 1961 to 1969

The Lunar Module (LM) hardware for the Apollo Space Program was designed and built by Grumman Corp. under contract to NASA. The original autopilot proposed for the LM was an analog system that used a modulator to pulse the reaction control jets on and off. While the control system was analog, the navigation and guidance functions ran on a digital computer that was common across the Command and Service Module (CSM) and the Apollo Booster. The algorithms for guidance and navigation were created by a team at the MIT Instrumentation Laboratories (MIT IL), now called Draper Labs.

Early in the Apollo program, NASA decided that a backup control system should be part of the LM design to enhance the mission reliability. They suggested programming the Guidance and Navigation Computer to serve as a backup control system for the LM. To do this, a design competition among three of the Apollo contractors (Grumman, MIT IL, and BellComm) was begun in early 1963.

The main issues with the digital autopilot design were the computer's storage and speed. Roughly 2000 16-bit instructions were allocated to the digital autopilot, and these operations could not interfere with the primary guidance and navigation functions. Among the many problems that had to be solved was the fact that the computer was not designed to process time-critical events: it had only one interrupt level and no digital-to-analog interfaces. Just to give a feel for how simple this computer was, Table 1 shows its entire set of operation codes. To implement the digital autopilot, a second interrupt was required. We exploited the ones-complement structure of the computer, which meant that there was both a positive zero and a negative zero. This made it possible to trigger a different interrupt when a counter was incremented from a negative number or decremented from a positive number.
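
For readers unfamiliar with ones-complement arithmetic, here is a small MATLAB illustration of the two zeros (an illustration of the number format only, not AGC code):

    % In ones-complement arithmetic, negation is a bitwise complement, so a
    % 16-bit word has two representations of zero. (Format illustration only.)
    plusZero  = uint16(0);              % 0000000000000000
    minusZero = bitcmp(uint16(0));      % 1111111111111111, i.e. "negative zero"
    dec2bin(plusZero, 16)
    dec2bin(minusZero, 16)
    % A counter decremented from +1 lands on +0, while a counter incremented
    % from -1 lands on -0: two distinct bit patterns the hardware can detect.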

The following discussion will describe how this capability was exploited by MIT IL.

Table 1: The Operation Codes (Instructions) Available in the LM Guidance Computer.

The LM Guidance Computer had eight basic instructions, defined by the 3-bit octal op codes 0-7. A second set of codes (called extend instructions) was available by spending one machine cycle to indicate that the next op code was from the extended set. Most of the instructions used more than one 12-microsecond machine cycle. (Yes, this was only a 500 kHz machine, a clock rate 1,000 times slower than that of a 500 MHz Pentium.)

Proposing an Optimal Control System

In 1962, "modern control theory" was still an academic pursuit. There were no textbooks written on optimal control and recent graduates, including me, were not yet versed in state-space methods and certainly had not been exposed to optimal bang-bang control systems. Engineers at MIT IL worked very closely with students at MIT and, as a consequence, were early adopters of state-space modeling and optimal control techniques based on Pontryagin's Maximum Principle. One of the engineers at the Instrumentation labs, George Cherry, proposed using an optimal control system for controlling the vehicle. The unique insight created by using this approach was that the almost perfect knowledge of the dynamics of the rotational motion of a spacecraft in space allowed the control to be done at a very slow sample rate.

At the NASA meeting where members of each design team presented their approach, George Cherry invoked the image of Sir Isaac Newton standing by his side and telling the controller what to do. Needless to say, NASA selected the MIT design, and the decision to select this approach was the right one. The Grumman design required a sample time of 0.02 seconds or faster, whereas the MIT approach (with Newton's help) required a sample time of only 0.2 seconds (ten times slower than Grumman's design). Since MIT IL needed engineers at the time, a once-in-a-lifetime opportunity came my way to go to Massachusetts on a field assignment for Grumman. I became one quarter of the LM digital autopilot design team. (Yes, for over three years only four people worked on the autopilot.)

The Optimal Controller

The optimal controller developed by Cherry during the design competition was one that minimized a weighted mix of time and fuel. At the time, this theory was described in a book by Athans and Falb [2], which was then available only in manuscript form; the book was not published until 1966.
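
For orientation, the standard time-fuel performance index of this type, as treated by Athans and Falb, trades elapsed maneuver time against total jet impulse (this general form is given here for reference; it is not quoted from the original design documents):

    J = \int_0^{t_f} \left( 1 + k\,\lvert u(t) \rvert \right) dt

where u(t) is the commanded jet torque and k weights fuel consumption relative to maneuver time.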

Fig. 1: The Reaction Control Jet Phase Plane Switching Logic used in the Lunar Module.

Figure 1 shows the logic for firing the reaction control jets programmed into the LM. The parabolas in the figure are the "switch curves" that determine when the reaction jets are turned on and off. To clarify this logic, a typical situation is diagrammed. The rate and the attitude of the LM are measured, and based on this measurement a set of jets is turned on. For the measurement shown at point #1, the decision must be made to fire jets that will give a negative acceleration. The logic for doing this is based on where the measurement lies relative to the parabolas shown in the figure. As the jets fire, the trajectory in this "phase plane" is the parabola shown. When this parabola crosses the "off-switch curve" (the lower parabola), the jets are turned off. Since the rate is negative at this point, the attitude drifts to the left at a constant rate until the trajectory crosses the switch curve for turning the jets on with a positive acceleration. Choosing how frequently the measurements are made is the difficult part of the design.
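
As an illustration of the switching logic (and only an illustration: the single switching function with a simple deadband below stands in for the separate on- and off-switch parabolas of the actual design, and the variable names and constants are assumptions), a minimal MATLAB sketch might look like this:

    function u = phase_plane_logic(theta, omega, alpha, d)
    % Illustrative phase-plane jet logic for one axis (not the flight code).
    % theta - attitude error (rad), omega - attitude rate (rad/s)
    % alpha - jet angular acceleration (rad/s^2), d - deadband half-width (rad)
    % u     - commanded angular acceleration: +alpha, -alpha, or 0 (coast)

    % Switching function built from the parabolic switch curves of a double
    % integrator: s = theta + omega*|omega|/(2*alpha).
    s = theta + omega*abs(omega)/(2*alpha);

    if s > d
        u = -alpha;      % above the upper switch curve: fire negative jets
    elseif s < -d
        u = +alpha;      % below the lower switch curve: fire positive jets
    else
        u = 0;           % inside the deadband: coast
    end
    end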

The computer speed and storage limits imposed severe constraints. For example, most control systems engineers would implement this controller by looking at the attitude and rate at some fast sample time to decide where in the phase plane the system currently was. Based on this, the jets would be either turned on or off. In fact, this is the way the autopilot was implemented when the space shuttle was designed. However, the computer constraints on the LM meant that this strategy would not work since there was not enough processing power to allow fast sample rates. Enter Newton.

The LM Autopilot Becomes the Primary System

When the MIT IL team proposed their autopilot, they proposed that the control be done by sampling the attitude and rate at a very slow rate. If these measurements indicated that the jets needed to be turned on to correct the attitude error, the time needed to reach the off-switch curve was computed and the jets were turned on. The jets were turned off at the appropriate time using a counter in the computer to create an interrupt (hence the reason we needed two levels of interrupt) that would process the turn-off command. This was the fundamental idea that allowed the long sample times. It was accurate because of the low measurement noise and the ability to precisely predict the trajectory in the phase plane. The only uncertainties were the precise value of the acceleration due to variations in jet thrust, imprecise knowledge of the vehicle inertia, and the noise in the measurements. With this scheme, MIT IL was able to demonstrate that the autopilot needed to be sampled only every 0.2 seconds (0.1 seconds during the ascent from the Moon, when the LM inertias were small and the acceleration was large). NASA was so impressed by this structure that they decided not only to implement this LM autopilot, but to make it the primary system and relegate the analog system to backup.
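
A hedged MATLAB sketch of that predictive calculation (the off-switch curve, variable names, and numbers are simplified assumptions, not the flight algorithm):

    % Illustrative calculation of a jet turn-off time (not the flight algorithm).
    theta0 = 0.02;    % attitude error at the sample instant, rad (assumed)
    omega0 = 0.00;    % attitude rate at the sample instant, rad/s (assumed)
    alpha  = 0.1;     % known jet angular acceleration magnitude, rad/s^2 (assumed)
    d      = 0.005;   % target attitude offset of the off-switch parabola, rad (assumed)

    % While the -alpha jets fire, the phase-plane trajectory obeys
    %   theta = theta0 + (omega0^2 - omega^2)/(2*alpha).
    % Take as an illustrative off-switch curve the braking parabola into (-d, 0):
    %   theta = -d + omega^2/(2*alpha),  omega <= 0.
    % Intersecting the two gives the rate at turn-off and hence the burn time.
    omega_off = -sqrt(alpha*(theta0 + d) + omega0^2/2);   % rad/s at jet turn-off
    t_burn    = (omega0 - omega_off)/alpha;               % seconds of jet firing

    fprintf('Fire the -alpha jets for %.3f s, then coast.\n', t_burn);
    % In the LM, a time like t_burn was loaded into a counter whose overflow
    % raised the second interrupt that issued the jets-off command.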

Coding and Calculating by Hand

Many of the software processes that are part of the tools of the trade for today's software developer did not exist in 1963. As a consequence, many of these procedures had to be invented. These were often self-imposed disciplines that made the designer's life easier. One of the first tasks I was given at MIT IL was to develop logic for selecting the appropriate reaction control jets.

The code represented by the flow chart in Fig. 2, for example, shows the self-imposed disciplines I used to develop this logic. Each path in the code was timed by hand: both the number of instructions executed and the timing for each branch were calculated using the nominal cycle times for each instruction. In addition, each of the interrupt-processing task-timing numbers was calculated by hand. The flow chart in Fig. 2 is part of the actual computer code, in assembly language, that took over a year to develop. This was only a single step in the overall process.
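
To give a feel for the arithmetic of timing a branch by hand (the instruction counts below are invented for illustration, not taken from the flight code):

    % Illustrative hand-timing of one code branch (instruction counts invented).
    cycle_time  = 12e-6;                   % seconds per machine cycle
    n_instr     = [14 6];                  % instructions on this path taking 1 and 2 cycles
    cycles      = [ 1 2];                  % machine cycles per instruction class
    branch_time = sum(n_instr .* cycles) * cycle_time;
    fprintf('Branch: %d instructions, %.0f microseconds.\n', ...
            sum(n_instr), branch_time*1e6);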

Fig. 2: Original flowchart from 1966 that shows the software design for a piece of the jet select logic code written in assembly language.

The control system design was developed and tested using a simulation that was written in Fortran at Grumman and in a language called MAC at MIT. Once the design was frozen, the assembly language code was written. This code was then tested in a simulation that also emulated the actual computer. This simulation used the actual assembly language code. To understand how cumbersome the process was, a single "computer run" took half a day. I would typically submit a run in the late afternoon (using IBM cards) and get the results back at 3:00 in the morning. Often, I would get up in the middle of the night and walk from my hotel to MIT IL to fix errors. I would then resubmit the run so that a new set of output would be available later in the morning. The results were always in the form of a ten-inch-thick stack of paper with the results of the calculations at each step in the execution of the code.

One reason why the code segment in Fig. 2 was so complex was that the number of jets that could be used to control rotations about the pilot axes was large (see Fig. 3). A decision was made to change the axes that the autopilot was controlling to the "jet axes" shown in Fig. 3. This dramatically reduced the number of lines of code and made it feasible to program the autopilot in the existing computer. Without this improvement, it would have been impossible for the autopilot to use only 2000 words of storage. The lesson of this change is that when engineers are given the opportunity to code the system they are designing, they can often modify the design to greatly improve the code. These are changes that programmers would never suggest, since they code only what is written in the code specification. But with MATLAB, Simulink, and Stateflow, the design engineer can also be the one who codes the design (using Real-Time Workshop), and the gap between designer and coder is reduced.

Fig. 3: The 16 reaction jets on the Lunar Module as they were positioned relative to the pilot.

How Would We Do The Lunar Module Digital Autopilot Today?

With today's tools we can analyze, design, simulate, and test a system as complex as the LM much more efficiently and completely. Tools from The MathWorks are ideal for this task. The top level of the model shown in Fig. 4 took about 1 hour to create. It represents the three-degree-of-freedom rotational motion of the LM.

Fig. 4: Top-level Simulink diagram of the Lunar Module Digital Autopilot. The LM rotational dynamics was modeled in this diagram in about 1 hour, a fraction of the time it took for the original design. This diagram can serve multiple functions, including simulation, analysis, and code generation. It is truly an executable specification of the LM Autopilot.
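
As a hedged sketch of the kind of plant model the diagram captures, the three-degree-of-freedom rotational motion is just Euler's rigid-body equations; the inertia and torque values below are placeholders, not LM data:

    % Minimal sketch of 3-DOF rigid-body rotational dynamics (Euler's equations).
    % The inertia matrix and applied torque are placeholders, not LM values.
    I = diag([25000 25000 20000]);            % inertia, kg*m^2 (assumed)
    T = [500; 0; 0];                          % constant body torque from the jets, N*m (assumed)

    % omega_dot = inv(I) * (T - omega x (I*omega))
    euler = @(t, w) I \ (T - cross(w, I*w));

    [t, w] = ode45(euler, [0 10], [0; 0; 0]); % integrate from rest for 10 seconds
    plot(t, w)
    xlabel('Time (s)'), ylabel('Body rates (rad/s)')
    legend('\omega_x', '\omega_y', '\omega_z')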

Technical Computing Environment

Because of the unique design challenges presented by the LM autopilot, a powerful computing environment was required. In the 1960's we had to build this environment. Today, MATLAB can be used as the base environment. Design parameters, controller specifications, and analysis routines are all efficiently handled through MATLAB. As an example, all of the design parameters were stored as MATLAB workspace variables, making them easily accessible from within Simulink, Stateflow, and Real-Time Workshop (Fig. 5).

Fig. 5: For this comparison, all the parameters were stored in MATLAB, making them available in Simulink, Stateflow, and Real-Time Workshop. The MATLAB Workspace Browser gives you quick access to all variables in the MATLAB workspace and can work as a data repository for analysis and simulation with Simulink and Stateflow.
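
A minimal sketch of how such shared parameters might be placed in the workspace (the names and numerical values here are illustrative assumptions, not the data used in the article's model):

    % Illustrative design parameters in the MATLAB workspace, where Simulink and
    % Stateflow can reference them by name (values are assumptions, not LM data).
    Ts        = 0.2;                              % autopilot sample time, s (0.1 s during ascent)
    inertia   = diag([25000 25000 20000]);        % vehicle inertia, kg*m^2
    jetThrust = 445;                              % reaction jet thrust, N
    leverArm  = 2.5;                              % jet moment arm, m
    alpha     = jetThrust*leverArm/inertia(1,1);  % single-jet angular acceleration, rad/s^2
    deadband  = 0.3*pi/180;                       % attitude deadband, rad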

MATLAB can also be used to create an animated analysis plot for the phase plane in which the autopilot operated. You can watch this plot as the simulation is running to verify that the autopilot is firing the jets as designed (Fig. 6).

Fig. 6: Animated MATLAB phase-plane plot for the Lunar Module. Using MATLAB's graphics, you can create custom animations that are driven by data in Simulink and Stateflow during a simulation.
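
A minimal sketch of this kind of animation in plain MATLAB (the trajectory below is synthesized for illustration rather than taken from a simulation):

    % Minimal phase-plane animation sketch (trajectory data synthesized for illustration).
    theta = 0.02*cos(linspace(0, 4*pi, 400));     % attitude error history, rad
    omega = -0.02*sin(linspace(0, 4*pi, 400));    % attitude rate history, rad/s

    figure, hold on, grid on
    axis([-0.03 0.03 -0.03 0.03])
    xlabel('Attitude error (rad)'), ylabel('Attitude rate (rad/s)')
    h     = plot(theta(1), omega(1), 'o');        % marker for the current state
    trail = plot(theta(1), omega(1), '-');        % trajectory trail
    for k = 2:numel(theta)
        set(h,     'XData', theta(k),   'YData', omega(k));
        set(trail, 'XData', theta(1:k), 'YData', omega(1:k));
        drawnow                                   % update the plot as the data evolves
    end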

Top-Level Simulink Diagram of the LM Digital Autopilot

Simulink version 3.0 contains numerous features that enable teamwork by a group of engineers. Features such as the library and model browsers, configurable subsystems, and version management tools facilitate the design of larger and more complex systems. Although the LM was a complex machine for its time, the systems being designed and built with Simulink today are much more complex.

The LM digital autopilot is a subsystem in Fig. 4. The autopilot was developed in a fraction of the time it took for the original design. This is a result of the multiple functions, including simulation, analysis, and code generation, that Simulink provides. The diagram shown in Fig. 4 is truly an executable specification of the LM autopilot.

Reaction Jet Control System of the LM in Simulink

The reaction jet control (see Fig. 7) is performed in the Simulink subsystem called "Reaction Jet Control" and is stored in a Simulink Library. Libraries allow you to create reusable components for other engineers and analysts to work with. The original guidance and control group had approximately 30 engineers who needed to work in a group environment sharing models and ideas.

Fig. 7: Reaction jet control system of the Lunar Module in Simulink. This system is modeled as just one component of the complete system diagram for the LM autopilot. Simulink allows componentization of subsystems, which helps large numbers of engineers work together on projects simultaneously.

On a project of this scale, a significant staff of engineers would need access to portions of the model at different stages of the project. With library components, engineers can work separately on different pieces of the model, be they the plant dynamics models or the controllers. Then the entire system can be brought together for overall system simulation.

Stateflow and Simulink Together Provide Complete System Modeling

A critical requirement of complex system-level design is the ability to accurately model and simulate reactive systems. Tightly integrated with Simulink, Stateflow provides engineers and designers with a solution for designing embedded systems by giving them an efficient way to incorporate complex control and supervisory logic within their Simulink models.

The Stateflow diagram (see Fig. 8) shows the logic that implements the phase-plane control algorithm described earlier (see Fig. 1). Depending on which region of the phase plane the LM is in, the Stateflow diagram will be in either a Fire_region or a Coast_region state. Note that the transitions between these states depend on parameters describing the switch-curve arcs. These arcs are actually calculated in the Simulink portion of the model and passed to the Stateflow diagram. The Stateflow diagram determines whether or not to transition to another state and then computes which reaction jets to fire.

Fig. 8: Stateflow and Simulink together provide complete system modeling. The Yaw controller for the LM relies on the combined power of Simulink and Stateflow for modeling complex dynamic and reactive systems in the same environment.
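
A hedged MATLAB sketch of the kind of two-state fire/coast logic the chart implements (the state names mirror the figure, but the transition conditions and outputs are simplified assumptions):

    function [state, jets] = yaw_state_logic(state, s, d)
    % Simplified two-state fire/coast logic (illustrative, not the actual chart).
    % state - 'Coast_region' or 'Fire_region'
    % s     - switching-function value computed in the Simulink part of the model
    % d     - deadband half-width
    % jets  - +1 / -1 to fire positive/negative yaw jets, 0 to coast

    switch state
        case 'Coast_region'
            if abs(s) > d            % crossed an on-switch curve
                state = 'Fire_region';
            end
        case 'Fire_region'
            if abs(s) < d/2          % crossed the off-switch curve (hysteresis)
                state = 'Coast_region';
            end
    end

    if strcmp(state, 'Fire_region')
        jets = -sign(s);             % fire the jets that oppose the error
    else
        jets = 0;
    end
    end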

System Integration, Testing, and Deployment

With initial testing and verification completed, the next step in the development process is to verify performance by testing the software on the target hardware and running more complete system simulations. MIT IL, Grumman, and NASA all had hardware-in-the-loop (HIL) simulations of the LM. These simulations were expensive and took many years to develop and implement. Grumman's simulation, for example, occupied three floors of a building and covered over 12,000 square feet. The first floor held a complete mock-up of the LM cockpit on a motion base, together with the inertial measurement system and other hardware that measured motions. The second floor housed an analog computer that simulated the rotational motion of the LM, while the third floor had a dedicated IBM 7090 computer (the best available at the time) to simulate trajectory, guidance, and navigation.

Today, using Real-Time Workshop, the LM project could implement HIL simulation with far fewer resources. Production versions of actuators, sensors, and other physical systems could interface with the real-time simulation of the software. Where physical components are impractical, Simulink models that mimic real-life measurement, system dynamics, and actuation signals could be used. In either case, Real-Time Workshop can generate code in appropriate formats. In addition, the customizable code formats allow for connecting inputs, outputs, and parameters to existing code where necessary.

Full System Testing

Real-Time Workshop's rapid simulation target also aids the testing and deployment of the LM system. It lets us make many simulation runs with multiple parameter sets. In the LM example, knowledge of the plant dynamics was very good because of the lack of external disturbances on the spacecraft. However, the variation in jet thrust, the imprecise knowledge of the inertia, and the noise in the measurements were uncertainties that needed to be analyzed. Monte Carlo analysis through the rapid simulation target is an ideal way to verify performance over the entire range of these uncertainties.
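
A hedged sketch of such a Monte Carlo study driven from MATLAB (the model name 'lm_autopilot', the parameter names, the output logging, and the dispersion ranges are all illustrative assumptions):

    % Illustrative Monte Carlo dispersion study (model name, parameters, logging,
    % and ranges are assumptions, not taken from the original design).
    nRuns  = 200;
    maxErr = zeros(nRuns, 1);

    for k = 1:nRuns
        % Disperse the uncertain quantities (1-sigma values are assumed)
        assignin('base', 'jetThrust',    445*(1 + 0.05*randn));  % N
        assignin('base', 'inertiaScale', 1 + 0.03*randn);        % inertia knowledge error
        assignin('base', 'noiseSigma',   1e-4);                  % rad, measurement noise

        out = sim('lm_autopilot', 'StopTime', '60');  % hypothetical Simulink model
        maxErr(k) = max(abs(out.yout));               % peak attitude error, rad
    end

    histogram(maxErr)
    xlabel('Peak attitude error (rad)'), ylabel('Number of runs')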

Embedded Code Generation

Unlike the labor-intensive way the code was developed for the original LM, the combination of MATLAB, Simulink, Stateflow, and Real-Time Workshop allows code to be rapidly developed. Thus, a team of engineers can work together to iteratively develop a good design.

The complete Simulink and Stateflow diagram for the LM autopilot affords us full system simulation. For code that is deployed, we only need the model hierarchy in the Reaction Jet Control subsystem and below. These sections of the model contain the behavior requirements for the embedded digital control system. Real-Time Workshop generates highly efficient embedded code for processors with stringent memory and resource requirements (Fig. 9).

Fig. 9: This sample Ada code from the reaction jet control timer was automatically generated from the Simulink model. Using Real-Time Workshop, embedded C or Ada code is generated automatically, allowing more time for design refinement.

Summary

Working on the design of the Lunar Module digital autopilot was the highlight of my career as an engineer. When Neil Armstrong stepped off the LM onto the moon's surface, every engineer who contributed to the Apollo program felt a sense of pride and accomplishment. We had succeeded in our goal. We had developed technology that never existed before, and through hard work and meticulous attention to detail, we had created a system that worked flawlessly.

Recreating the digital autopilot design using the MathWorks tools brought back a lot of the memories of the struggles we went through to create the original design. It also emphasized how much better the design process is today: computer performance is orders of magnitude better; designing a system with MATLAB, Simulink, Stateflow, and the toolboxes is far easier. A surprising attribute of today's process is the tight integration of conceptualizing and computing. It was possible for me to redo the entire LM digital autopilot design in about a week because I was able to conceptualize an approach and immediately see if the idea had merit. The analysis, simulation, and testing blend together into a seamless procedure. This, in my mind, is the power of the MathWorks family of tools.

Published 1999
