Changes in OrdinaryDiffEq.jl v3

Hi!

I know, it has been a long, long time since my last post. I would like to apologize! I have been very busy with a massive amount of work at INPE.

Anyway, this will be a very short post. I just want to update the source code of the last lesson so that it works under Julia v0.6 and OrdinaryDiffEq.jl v3.

As I said before, the package OrdinaryDiffEq.jl, which is part of the JuliaDiffEq project, is amazing. In my humble opinion, it is one of the best open-source differential equation solvers out there. The development of this suite is occurring at a very fast pace, which is good, because the package gets better every day. The downside is that the API also changes very fast, and sometimes your code needs refactoring to keep working in newer versions.

Necessary background

This is just an update of the source code presented in the last post. Hence, I highly suggest reading it first:

https://www.ronanarraes.com/2017/02/extending-the-ode-solutions-in-julia-by-creating-custom-data-arrays-for-the-simulations/

Changes in OrdinaryDiffEq.jl v3

Since I posted the last tutorial, the OrdinaryDiffEq.jl API has changed (strictly speaking, the API is defined in DiffEqBase.jl, but it is updated when you update OrdinaryDiffEq.jl). There are three modifications we must make so that the previous source code works correctly with Julia v0.6 and OrdinaryDiffEq.jl v3.0.3.

Custom Data Array Interface

Previously, a custom data array was created by defining a new structure that inherits from the abstract type DEDataArray{T}. This has changed, and the correct abstract type is now DEDataVector{T}. Hence, our SimWorkspace definition must be updated.
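A minimal sketch of the updated definition, reconstructed from the previous post (the field names are the ones used there), is:

```julia
# Julia v0.6 / OrdinaryDiffEq.jl v3: inherit from DEDataVector instead of DEDataArray.
mutable struct SimWorkspace{T} <: DEDataVector{T}
    x::Array{T,1}   # State vector (mandatory field).
    a::T            # Commanded acceleration set by the control loop.
end
```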

Dynamic equation signature

In the version used for the last tutorial, the signature of the dynamic function was as follows.
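A sketch of the old form (the function name sim_dyn is illustrative):

```julia
function sim_dyn(t, u, du)
    # Compute the time-derivative du in place from the state u at time t.
end
```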

Due to some internal changes, the signature is now the following.
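A sketch of the new form (same illustrative name):

```julia
function sim_dyn(du, u, p, t)
    # du: time-derivative (output), u: state, p: parameters, t: time.
end
```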

p is a vector of parameters. It is unused in this example, but it simplifies a lot of things. If you want more information, please see http://docs.juliadiffeq.org/stable/tutorials/ode_example.html#Defining-Parameterized-Functions-1.

Callback condition function signature

The callback condition function signature was also changed from:
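A sketch of the old form (the name condition_control_loop comes from the previous tutorial, and tstops holds the control-loop sampling instants):

```julia
function condition_control_loop(t, u, integrator)
    t in tstops
end
```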

to:
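Again a sketch with the same illustrative names; only the argument order changes:

```julia
function condition_control_loop(u, t, integrator)
    t in tstops
end
```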

Updated source code

Finally, the updated source code of the last tutorial can be seen below. It will provide exactly the same result.
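Since the original listing is not reproduced here, the following is a minimal sketch of how the updated program could look, reconstructed from the previous posts. The names (particle_dyn, control_loop!, SimWorkspace), the reference position r, and the final time tf are assumptions carried over from those posts, not necessarily the exact original code.

```julia
using OrdinaryDiffEq

# Simulation workspace: state vector plus the commanded acceleration.
mutable struct SimWorkspace{T} <: DEDataVector{T}
    x::Array{T,1}
    a::T
end

# Continuous dynamics: ṗ = v, v̇ = a (new signature with the parameter vector p).
function particle_dyn(du, u, p, t)
    du[1] = u[2]
    du[2] = u.a
end

const r      = 1.0                   # Reference position (assumed value).
const tf     = 30.0                  # Simulation time (assumed value).
const tstops = collect(0.0:1.0:tf)   # Control-loop sampling instants (1 s period).

# Discrete callback: the condition checks the sampling instants...
condition_control_loop(u, t, integrator) = t in tstops

# ...and the affect function runs the PD control law.
function control_loop!(integrator)
    p = integrator.u[1]
    v = integrator.u[2]
    a = 0.3*(r - p) + 0.8*(0.0 - v)

    # Write the new acceleration in every solver cache (as in the previous post;
    # newer releases document full_cache(integrator) for this purpose).
    for c in user_cache(integrator)
        c.a = a
    end
end

cb = DiscreteCallback(condition_control_loop, control_loop!)

# Initial state (position, velocity, and acceleration at zero) and solver call.
u0   = SimWorkspace{Float64}(zeros(2), 0.0)
prob = ODEProblem(particle_dyn, u0, (0.0, tf))
sol  = solve(prob, Tsit5(), callback = cb, tstops = tstops)
```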

Conclusions

This tutorial just updated the source code of the last one so that it can work using Julia v0.6 and OrdinaryDiffEq.jl v3.0.3.

For my next tutorial, I am planning a simple example of how to code Kalman filters in the Julia language!

Extending the ODE Solutions in Julia by Creating Custom Data Arrays for the Simulations

NOTE: The source code of this tutorial will not work under the latest versions of Julia and OrdinaryDiffEq.jl. You can see an updated version of the source code here.

Hi!

In this new post, I will show how we can create a Data Array to extend the capabilities of the Julia package OrdinaryDiffEq when simulating dynamic systems.

Necessary background

This tutorial builds on a problem introduced in my last tutorial, so it is very important to read it first:

https://www.ronanarraes.com/2017/02/using-julia-to-simulate-systems-composed-of-continuous-and-discrete-parts/

Problem definition

In my last post, I showed how I use the julia package OrdinaryDiffEq to simulate control systems that have continuous and discrete parts. However, there were two big problems:

  1. I needed to use a global variable to pass the information from the discrete part of the system to the dynamic function (in that case, the information was the acceleration);
  2. I would have to do some very bad hacks to plot the computed acceleration (creating a global vector to store the computed values).

As I mentioned, the optimal solution to circumvent those problems was somewhat difficult to achieve. Fortunately, Chris Rackauckas, who is the maintainer of OrdinaryDiffEq, introduced the concept of Data Arrays, which greatly simplified the solution to those problems. Notice that you will need v1.4.0 or higher of OrdinaryDiffEq in order to have access to the new Data Array interface.

In this tutorial, the problem we want to solve is: how to create custom parameters that the discrete part of the system can modify and that will influence the dynamic function?

A little history

When I started to use OrdinaryDiffEq to simulate the attitude and orbit control subsystem of the Amazonia-1 satellite, I saw that I needed two kinds of variables:

  1. The ones that compose the state-vector and that will be integrated by the solver; and
  2. Some parameters that the control loop can modify and will influence the dynamic solution, like the commanded torque for the reaction wheels and magneto-torquers.

However, at that time, there was no easy way to achieve this. The only method was to create a custom type in Julia to store the parameters that are external to the state vector. The problem was that you needed to implement many methods yourself in order to make this new type compatible with the OrdinaryDiffEq interface. In the following, you can see what you had to do to define a parameter called f1. This code was used in an issue I opened on GitHub: https://github.com/JuliaDiffEq/DifferentialEquations.jl/issues/117

Old way to create a custom type in OrdinaryDiffEq
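The original listing is not reproduced here, but it looked roughly like the sketch below (Julia 0.5 syntax of the time): a custom array type carrying the extra parameter f1, with the array interface forwarded by hand so that the solver could treat it as a state vector. The exact set of required methods is in the linked issue; this is only illustrative.

```julia
# Illustrative only: the old approach wrapped the state vector in a custom
# AbstractVector and forwarded the array interface manually.
type SimType{T} <: AbstractVector{T}
    x::Vector{T}   # State vector.
    f1::T          # Extra parameter used inside the dynamic function.
end

Base.size(S::SimType)                 = size(S.x)
Base.getindex(S::SimType, i::Int)     = S.x[i]
Base.setindex!(S::SimType, v, i::Int) = (S.x[i] = v)
Base.similar(S::SimType)              = SimType(similar(S.x), S.f1)
# ...plus several other methods (copying, broadcasting, etc.) required by the
# solver, which made this approach quite cumbersome.
```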

Hence, Chris Rackauckas decided to create a new interface inside the OrdinaryDiffEq package so that you can easily build your own custom types to be used by the solver.

Note: Actually the definition of the new interface is placed at the package DiffEqBase, which is a dependency of OrdinaryDiffEq.

Data Array interface

This interface allows you to define your own types to be used by the solver. In order to do this, you just need to define a type that inherits DEDataArray:
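A sketch of such a definition, using the Julia 0.5 syntax of the time (the type name and the fields other than x are placeholders):

```julia
type MyDataArray{T} <: DEDataArray{T}
    x::Array{T,1}   # Mandatory field: the state vector itself.
    a::T            # Any extra parameters you need...
    b::Array{T,1}   # ...scalars, arrays, etc.
end
```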

where there must be an x array that defines the state-vector and any other parameter that you will need in your simulation. Given that, you can access any of those parameters inside the solver and callback functions using u.a, u.b, etc. You can also access the i-th element of the state vector using u[i]. In other words, you do not need to use the cumbersome notation u.x[i].

This pretty much replaces all the code I showed before 👍.

Solving our problem

Hence, we now can remove the global variable a (🎉🎊🎉🎊🎉🎊) by defining the following type:
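The definition below is a sketch reconstructed from the text (the field names are the ones described in this post):

```julia
type SimWorkspace{T} <: DEDataArray{T}
    x::Array{T,1}   # State vector: position and velocity.
    a::T            # Commanded acceleration set by the control loop.
end
```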

Inside our dynamic function, we can now get the commanded acceleration using u.a:
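Here is a sketch; the function name particle_dyn is an assumption, and the dynamics are the ones from the previous tutorial.

```julia
function particle_dyn(t, u, du)
    du[1] = u[2]   # ṗ = v.
    du[2] = u.a    # v̇ = a, now taken from the workspace instead of a global.
end
```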

Now, we just need to modify the affect function of our callback to write on this variable:
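The listing below is a sketch based on the description that follows; the function name control_loop! is an assumption, and r, k, and d are the controller constants from the previous tutorial.

```julia
function control_loop!(integrator)
    p = integrator.u[1]   # Current position.
    v = integrator.u[2]   # Current velocity.

    # PD control law (r, k, and d defined as in the previous tutorial).
    a = k*(r - p) + d*(0.0 - v)

    # Write the new acceleration in the current state and in all solver caches.
    for c in user_cache(integrator)
        c.a = a
    end
end
```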

You might have noticed that there is a strange for loop here. It is needed because of the internal structure of the ordinary differential equation solvers. For example, the 4th-order Runge-Kutta algorithm uses four caches, called \(k_1\), \(k_2\), \(k_3\), and \(k_4\), which are essentially the dynamic function evaluated at different state-space points and are used to obtain the solution at the next sampling step. In this case, the parameter a must be set in all those caches to provide the correct solution. This is precisely what that for loop is doing: it loops over all the caches in the solver and sets the parameter a to the acceleration computed by this callback affect function. Notice that the current state vector integrator.u is part of the list user_cache(integrator) and, consequently, is also set inside that loop. The good thing is that, by using this for loop, you do not have to worry about how many caches the selected solver has.

Finally, we just need to create our initial state using the SimWorkspace type:
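A sketch of the initialization and of the solver call (particle_dyn is the function above; cb, tstops, and tf come from the previous tutorial):

```julia
u0 = SimWorkspace{Float64}(zeros(2), 0.0)

prob = ODEProblem(particle_dyn, u0, (0.0, tf))
sol  = solve(prob, Tsit5(), callback = cb, tstops = tstops)
```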

where the first parameter initializes the state vector with zeros (position and velocity), and the second parameter initializes the acceleration. After the execution, we can access the commanded acceleration at each saved time step using sol[i].a.

If you are using PyPlot, then the state vector and the acceleration can be plotted as shown below.
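One possibility is the following sketch (the exact figure layout of the original post may differ):

```julia
using PyPlot

t   = sol.t
p   = [sol[i][1] for i = 1:length(t)]
v   = [sol[i][2] for i = 1:length(t)]
acc = [sol[i].a  for i = 1:length(t)]

plot(t, p,   label = "Position")
plot(t, v,   label = "Velocity")
plot(t, acc, label = "Commanded acceleration")
legend()
grid(true)
```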

which should provide you the following result:

If you would like to change the initial state, then you just need to modify u0 before calling the solver:
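For example, to start the particle at position 1 (an arbitrary value chosen here for illustration), you could construct a new workspace:

```julia
u0 = SimWorkspace{Float64}([1.0; 0.0], 0.0)
```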

or
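You can set the element directly, using the array-like indexing of the Data Array (same initial state, just a different syntax):

```julia
u0    = SimWorkspace{Float64}(zeros(2), 0.0)
u0[1] = 1.0   # Same initial position, set through the array interface.
```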

which will provide you the following result:

Conclusions

In this tutorial, I showed how you can use the Data Array interface of the OrdinaryDiffEq package to create your own custom simulation workspaces. Hence, you now know how to create parameters that can be used inside the solver but do not belong to the state vector.

I provided an example by extending the solution presented in a previous tutorial, where we needed a global variable to handle this kind of parameter. Notice that, although the presented example was very simple, this concept can be easily extended to give you an easy, elegant way to simulate much more complex systems.

If I was not very clear, please, let me know in the comments! I will use all your advice to improve this and my future tutorials!

In the following, you can find the source code of this tutorial.

Source code

Edit (2017-02-28): Grammar corrections and minor text improvements.

Using Julia to simulate systems composed of continuous and discrete parts

NOTE: The source code of this tutorial will not work under the latest versions of Julia and OrdinaryDiffEq.jl. You can see an updated version of the source code here. Notice that the theory remains the same!

Hi!

In this first post, I would like to explain how I circumvented a problem I always had related to the simulation of dynamic systems that have discrete and continuous parts. Notice that this is, for sure, MathWorks® Simulink territory. There is no doubt that Simulink is the fastest prototyping tool for this kind of application, in which you have a continuous dynamic model modified by discrete parts that are sampled at specific time intervals.

However, sometimes there are situations in which you cannot use Simulink (not having a license being the most obvious one…). In my specific use case, I am coding the Attitude and Orbit Control Subsystem (AOCS) simulator for the Amazonia-1 satellite. Simulink gave me some problems related to the simulation of discontinuities, like the shadows of some satellite parts on the coarse solar sensors. It turns out that I needed more control over the solver than Simulink could offer me. Hence, I decided to port the simulator to a “standard” language. After a long search, Julia and the package OrdinaryDiffEq provided the best solution to simulate my system. Hence, in this post, I would like to share how I was able to solve the problem of simulating systems with discrete and continuous parts using these tools.

Necessary background

I will try to keep this tutorial as simple as possible. However, it would be helpful if you have a minimum knowledge about:

  1. Julia language;
  2. Control systems; and
  3. Ordinary Differential Equations.

Problem definition

In most control systems, you have four parts:

  1. The plant, which is what you want to control;
  2. The sensors, which will measure quantities related to the processes you want to control;
  3. The actuators, which will act and modify the state of your plant; and
  4. The controller, which will read the sensor outputs and command the actuators to modify the plant given a set of specifications.

However, nowadays the controller is usually embedded in a microcomputer, which has predefined intervals to execute its functions. In other words, the controller reads the sensor outputs, updates the control law, and sends new commands to the actuators only at specific time instants (called sampling steps).

In order to simulate such a system, you have to consider two kinds of models: continuous and discrete. The plant, for example, is a continuous model, since it is usually described by a set of differential equations and generally varies for every t. The controller, as mentioned before, only updates its internal state at a determined set of time instants.

Academically speaking, the simulation of such a system can be handled by using the Z-transform to convert the continuous model of the plant into a discrete one. Hence, every model becomes discrete and can be simulated easily. This method has many problems. The biggest one, in my opinion, is that it only works for linear systems, and it can be very difficult to use with complex plants that have many state variables.

Hence, the problem is: how can I use a “standard” language to simulate a system in which you have a continuous part (the plant) that is controlled by a discrete process (the controller)? In this post, I will provide one possible solution using Julia and the package OrdinaryDiffEq.

Proposed dynamic system

I decided to propose a very simple dynamic system to be controlled. This will help to make the tutorial more interesting and easy to understand. The problem I selected is a simple particle control problem in one dimension, defined as follows:

Control the position of a particle that moves in one dimension. You can change the particle acceleration and you have access to the position and velocity measurements.

This particle can be modeled by a set of two differential equations describing the time-derivative of its position and velocity:

\(\begin{array}{ccc} \dot{p} &=& v \\ \dot{v} &=& a \end{array} ,\)

where \(p \) is the particle position, \(v \) is the particle velocity, and \(a \) is the acceleration, which we can control.

Hence, we can define the state vector as:

\(\mathbf{x} = \left[\begin{array}{c} p \\ v \end{array}\right] . \)

Finally, our dynamic system can be written as:

\(\dot{\mathbf{x}} = \left[\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right]\mathbf{x} + \left[\begin{array}{c} 0 \\ 1 \end{array}\right] \cdot a . \)

Note: This is precisely one of those systems that can be easily simulated using the Z-transform approach when you have a discrete controller. But the idea is to keep this tutorial simple so that you can understand the concept and extend it to more complex scenarios.

Controller

We want to control the acceleration of a particle so that it goes to a specific point in space, called \(r \). In this case, I will propose a (very) simple PD controller. The control law will be:

\(a = 0.3 \cdot (r-p) + 0.8 \cdot (0.0-v) \)

The controller must be a discrete process that is updated every 1s.

Note: For those unfamiliar with control systems, just assume that by computing the acceleration as shown here we can drive the particle to the desired position \(r \).

In the next sections, I will describe how you can use Julia and the package OrdinaryDiffEq to simulate this system in a very elegant way. You will see that you can easily extend this idea to handle much more complex systems.

Why did I choose julia?!

At my institute, INPE, I have to answer almost on a daily basis why I am using Julia as the language to do my work. It is kind of difficult to introduce new technologies, especially in areas where people still use FORTRAN due to all that legacy code.

I have been using Julia since version 0.2 (we are currently at 0.5). The reasons why I really like this language are:

  1. It feels like an interpreted language, in the sense that you do not need to worry about the types of the variables, which makes a MATLAB programmer very comfortable;
  2. You can very easily call C / FORTRAN functions from your Julia code, which is important since models like IGRF and MSIS are made available in FORTRAN;
  3. The performance of simulations in which vectorization is impossible (or very difficult) is much better than in other interpreted languages (like MATLAB, Octave, Python, etc.);
  4. You do not need to worry about vectorizing your code;
  5. Parallel programming is now very easy to achieve; and
  6. The syntax is very elegant (ok, I know, this is a very personal opinion 🙂 ).

Of course, there are many other good points about Julia; please visit the website http://www.julialang.org for more information.

Furthermore, for the kind of application I am focusing on in this tutorial, the package OrdinaryDiffEq, developed and maintained by Chris Rackauckas, is a game changer. Chris developed an amazing interface to interact with the solvers that lets you do almost everything. Conclusion: you have all the solvers you need (Runge-Kutta, Dormand-Prince, etc.) with the power to interact deeply with the algorithm and adapt it to your specific needs. I have to admit that I am not aware of any other package that provides such an elegant solution in any other language (please, let me know in the comments if I missed something!).

Setup

First, download the appropriate Julia package for your operating system at http://www.julialang.org (notice that I tested everything here using openSUSE Tumbleweed and Julia 0.5).

After installation, open Julia and install OrdinaryDiffEq.
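In the Julia REPL (the standard package commands for Julia 0.5; PyPlot is an assumption here and only needed for the plots at the end):

```julia
Pkg.update()
Pkg.add("OrdinaryDiffEq")
Pkg.add("PyPlot")   # Optional: only needed to reproduce the plots below.
```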

For more information about those commands to install the package, please take a look at the documentation here: http://docs.julialang.org/en/stable/manual/packages/

Simulating the system

The first thing to do is load the modules we need.
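A minimal sketch:

```julia
using OrdinaryDiffEq
using PyPlot        # Only needed for the plots at the end of the post.
```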

Solving an ODE using OrdinaryDiffEq is very similar to using other solvers available in MATLAB, e.g., ode45. You need to define your dynamic function, set the initial state, adjust some options, and call the solver. Hence, let's define our dynamic function.

In OrdinaryDiffEq, the dynamic function must have the following signature.
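Sketched below with an illustrative name (dyn_eq):

```julia
function dyn_eq(t, u, du)
    # Compute the time-derivative du in place from the state u at time t.
end
```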

where t is the time, u is the state vector, and du is the time-derivative of the state vector. Notice that u and du have the same dimension as the initial state. This function must compute du, but the user must be careful not to change the vector reference! Otherwise, the solver will not use the computed values.

For our proposed problem, the dynamic function will be:
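A sketch, with the function name particle_dyn assumed and the global variable a holding the commanded acceleration (see Note 1 below):

```julia
function particle_dyn(t, u, du)
    # u[1] = p (position), u[2] = v (velocity); `a` is the global commanded acceleration.
    du .= [u[2];
           a]
end
```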

Note 1: In this example, the acceleration variable will be global. This will make things easier, but it is a very (VERY) bad programming practice. However, the right way to avoid this global variable is a little bit more complicated and will be handled in a future tutorial.

Note 2: The .= operator means that the values on the right side will be copied into the vector du without changing its reference.

Ok, the dynamic system model (the continuous part of our problem) is done! Next, we will code our discrete control law.

Discrete controller using Callbacks

Callbacks are one type of interface with the solver that makes the OrdinaryDiffEq package powerful. You define two functions: one to check whether an event occurred (the condition function), and another to be executed in case this event has happened (the affect function). Hence, at every sampling step, the solver calls the condition function and, if it indicates that the event has occurred, the affect function is executed.

OrdinaryDiffEq provides two kinds of callbacks: continuous and discrete. In this case, we are interested in discrete callbacks, in which we can precisely define, prior to the execution, the sampling steps at which the affect function will be called. For more information, please see the documentation at http://docs.juliadiffeq.org/latest/features/callback_functions.html

Let's first define the condition function that will check whether we are at a sampling instant in which the control law must be updated.
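A sketch follows; the total simulation time tf is an assumed value, and the names match the description below.

```julia
const tf     = 30.0                  # Total simulation time [s] (assumed value).
const tstops = collect(0.0:1.0:tf)   # Sampling instants of the control loop (1 s period).

function condition_control_loop(t, u, integrator)
    t in tstops
end
```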

The variable tf defines the simulation time, and the array tstops defines the sampling instants of the control loop. The function condition_control_loop, our condition function, simply checks if t is one of our sampling instants. If this function returns true, then the affect function will be called. Notice that it is also possible to define callbacks based on the state vector u, but that is not our case.

Don't worry about the integrator variable. It is basically an interface to the integrator that lets you check and change many, many things. I will provide more information in future tutorials. If you want to know more about it, check the documentation at http://docs.juliadiffeq.org/latest/basics/integrator.html

Now, let's define our affect function, which, in this case, is our controller.
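A sketch, with the function name control_loop! assumed and the reference position r set to an arbitrary value for illustration:

```julia
const r = 1.0   # Reference position (assumed value).
const k = 0.3   # Proportional gain.
const d = 0.8   # Derivative gain.

function control_loop!(integrator)
    p = integrator.u[1]   # Current position.
    v = integrator.u[2]   # Current velocity.

    # PD control law; the result is written to the global variable `a`.
    global a = k*(r - p) + d*(0.0 - v)
end
```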

where r is the reference position, and k and d are, respectively, the proportional and derivative gains of the PD controller. In this case, the control loop only needs to compute the new acceleration, which is written to the global variable a 😕, using the current position and velocity of the particle.

Now, we can define the callback:
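Using the discrete callback constructor with the two functions defined above:

```julia
cb = DiscreteCallback(condition_control_loop, control_loop!)
```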

Calling the solver

Finally, we just need to define our initial state and call the solver of the OrdinaryDiffEq package to solve the problem for us.
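A sketch (the initial state and the global acceleration are set to zero here):

```julia
a  = 0.0          # Initial commanded acceleration (the global variable).
u0 = [0.0; 0.0]   # Initial position and velocity.

prob = ODEProblem(particle_dyn, u0, (0.0, tf))
sol  = solve(prob, Tsit5(), callback = cb, tstops = tstops)
```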

Tsit5() means that we are using the Tsitouras 5/4 Runge-Kutta method. Other possible options are DP5() for the Dormand-Prince 5/4 Runge-Kutta method, or BS3() for the Bogacki-Shampine 3/2 method. All these methods have adaptive step sizes, which is a good choice when simulating dynamic systems, but you can also use methods with fixed steps. For more information, see the documentation at http://docs.juliadiffeq.org/latest/solvers/ode_solve.html

After that, the solution of the problem will be stored in the variable sol. The time vector containing all sampling steps accepted by the solver is sol.t, and the state vectors at each of these instants are stored in sol.u.

If you are using PyPlot, you can plot the solution this way:
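One possibility (PyPlot was already loaded at the beginning):

```julia
t = sol.t
p = [u[1] for u in sol.u]
v = [u[2] for u in sol.u]

plot(t, p, label = "Position")
plot(t, v, label = "Velocity")
legend()
grid(true)
```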

The expected result of this simulation is:

Notice that you can easily see the points at which the control loop is computed, which immediately change the velocity of the particle.

Conclusion

In this first tutorial, I showed how I use Julia to simulate systems that have continuous and discrete parts. The concept was introduced with a very simple problem in which the position of a particle must be controlled using a discrete control loop.

I hope you could understand the concept well and see how easy it is to extend it to much more complex simulations. If I was not very clear, please let me know in the comments! I will use all your advice to improve this and my future tutorials!

In the following, you can find the source code of this tutorial.

Source code

  • Edit (2017-02-26): Grammar corrections.

Introducing… my new website!

Hi!

I finally decided to move my website from INPE's server (http://www.inpe.br/ete/dse/ronan) to my own hosted server. This happened because I needed more freedom to install a CMS, which was not possible on my old server.

Today, I finally finished the setup, and now you can expect some posts in the coming weeks about my job at INPE (the National Institute for Space Research in Brazil). Furthermore, I will try to provide tutorials about the modeling and simulation of space systems, which is currently one of my biggest responsibilities, as well as information related to signal processing, which was the area of my doctorate.

I hope you like it! Stay tuned 😉