Some Fun With Julia Types: Symbolic Expressions in the ODE Solver


In Julia, you can naturally write generic algorithms which work on any type that has a specific set of “actions”. For example, an AbstractArray is a type which has a specific set of functions implemented. This means that any generically-written algorithm that wants an array can be given an AbstractArray and it will “just work”. This kind of abstraction makes it easy to write a simple algorithm and then use that exact same code for other purposes. For example, distributed computing can be done by just passing in a DistributedArray, and the algorithm can run on the GPU by using a GPUArray. Because Julia’s functions auto-specialize on the types you give them, Julia automatically compiles efficient versions specifically for the types you pass in, which, at compile time, strips away the cost of the abstraction.
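As a minimal base-Julia sketch of this idea (the `trapezoid` function below is a hypothetical example I'm writing for illustration, not from any package), a generic algorithm written once against the AbstractArray interface works unchanged on any array-like type:

```julia
# Hypothetical example: a composite trapezoidal rule written generically.
# It only assumes indexing and arithmetic, so any AbstractArray works.
function trapezoid(y::AbstractArray, dx)
    s = (y[begin] + y[end]) / 2          # endpoints get half weight
    for i in (firstindex(y)+1):(lastindex(y)-1)
        s += y[i]                         # interior points get full weight
    end
    return s * dx
end

trapezoid([0.0, 1.0, 2.0], 1.0)  # a plain Vector
trapezoid(0.0:0.5:2.0, 0.5)      # a lazy range: same code, no allocation
```

The same call could in principle take a DistributedArray or a GPU array, and Julia would compile a specialized method for each element and container type.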

This means that one way to come up with new efficient algorithms in Julia is simply to pass a new type to an existing algorithm. Does this kind of thing really work? Well, I wanted to show that you can push this to absurd places, so let me share one of my latest experiments.

Note

The ODE solver part of this post won’t work on a released version until some tags go through (I’d say at least a week?). The fix I had to make was to the “internal instability checks”, which normally default to checking whether the number is NaN: they now always return false for numbers which aren’t AbstractFloats (meaning: no matter what, don’t halt integration due to instability). Do you see why that would be a problem when trying to use a symbolic expression? This is a good question to keep in mind while reading.
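To make the note concrete, here is a conceptual sketch (hypothetical code, not the actual OrdinaryDiffEq internals) of what a NaN-based instability check looks like:

```julia
# Hypothetical sketch of a default instability check: halt if any state is NaN.
default_unstable_check(u) = any(isnan, u)

default_unstable_check([1.0, NaN])  # true: the solver would halt
default_unstable_check([1.0, 2.0])  # false: keep integrating
```

For a symbolic state, `isnan` has no concrete true/false answer, which is why the fix makes the check simply return false for non-AbstractFloat number types.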

Symbolics.jl

Let me first give a little bit of an introduction to Symbolics.jl. Symbolics.jl is a symbolic computer algebra system (CAS) for the Julia programming language. Its symbolic expressions match Julia’s syntax, meaning dispatch and generic algorithms tend to handle everything else you need. For example, what if you want the symbolic expression for the inverse of a matrix? Just make a matrix of expressions, and call the Julia inverse function:

@variables y1 y2 y3 y4
A = [y1 y1+y2
     y3+y2 y4]
 
println(inv(A))
 
#[(y1^-1)*(true + ((y1 + y2)*(y1^-1)*(y2 + y3)*((y4 - ((y1 + y2)*(y1^-1)*(y2 + y3)))^-1))) -(y1 + y2)*(y1^-1)*((y4 - ((y1 + y2)*(y1^-1)*(y2 + y3)))^-1); -(y1^-1)*(y2 + y3)*((y4 - ((y1 + y2)*(y1^-1)*(y2 + y3)))^-1) (y4 - ((y1 + y2)*(y1^-1)*(y2 + y3)))^-1]

Notice that the only thing used directly from Symbolics.jl here is the declaration of the symbolic variables (the first line). The second line just creates a Julia matrix of these expressions, and the last line prints the result of applying the inv function from Julia’s Base library to that matrix of expressions.

Julia’s inverse function is generically written, so it only requires that the elements satisfy certain properties of numbers: they need to be able to add, subtract, multiply, and divide. Since our “number” type can do these operations, the inverse function “just works”. But also, by Julia’s magic, a separate method is compiled just for this element type, which makes this into a highly efficient function. So easy + efficient: the true promise of Julia is satisfied.
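The same principle extends to types you define yourself. As a base-Julia sketch (the `Interval` type here is a toy I'm defining for illustration, not a real interval-arithmetic package), implementing just `+`, `*`, and a promotion rule is enough for a generic Base algorithm like `evalpoly` (Horner's rule) to accept the new type and get a specialized compiled method:

```julia
# Toy interval number: tracks a lower and upper bound.
struct Interval <: Number
    lo::Float64
    hi::Float64
end
Base.:+(a::Interval, b::Interval) = Interval(a.lo + b.lo, a.hi + b.hi)
function Base.:*(a::Interval, b::Interval)
    ps = (a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi)
    return Interval(minimum(ps), maximum(ps))
end
# Promotion rules so integer coefficients mix with Intervals automatically.
Base.convert(::Type{Interval}, x::Real) = Interval(float(x), float(x))
Base.promote_rule(::Type{Interval}, ::Type{<:Real}) = Interval

# Base's evalpoly was never written with Interval in mind, yet it "just works":
evalpoly(Interval(0.0, 1.0), (1, 2, 3))  # bounds 1 + 2x + 3x^2 over x in [0, 1]
```

For this polynomial the Horner recurrence gives the bounds [1.0, 6.0], all without touching `evalpoly` itself.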

Going Deep: The ODE Solver

Now let’s push this. Say we have a problem where we want to find out which initial condition is required in order to get a specific value out. One way to calculate this is to use Boundary Value Problem (BVP) solvers, but suppose we want to do this at a bunch of different timepoints.
How about using a numerical ODE solver, except instead of using numbers, we use symbolic expressions for our initial conditions, so that we can calculate approximate functions for the timepoints. Sounds fun and potentially useful, so let’s give it a try.

The ODE solvers for Julia are in the package DifferentialEquations.jl. Let’s solve the linear ODE:

$$ \frac{dy}{dt} = 2y $$

with an initial condition which is a symbolic variable. Following the tutorial, let’s swap out the numbers for symbolic expressions. To do this, we simply make the problem type and solve it:

using DifferentialEquations, Symbolics
@variables y0
f = (y,p,t) -> 2y
prob = ODEProblem(f,y0,(0.0,1.0))
sol = solve(prob,RK4(),dt=1/10,adaptive=false)
 
julia> println(sol)
 
retcode: Success
Interpolation: 3rd order Hermite
t: [0.0, 0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6, 0.7, 0.7999999999999999, 0.8999999999999999, 1.0]
u: Num[y0, 1.2214y0, 1.49181796y0, 1.8221064563440001y0, 2.225520825778562y0, 2.7182511366059354y0, 3.3200719382504893y0, 4.055135865379148y0, 4.952942945974091y0, 6.049524514212755y0, 7.388889241659459y0]

The solution is an array of symbolic expressions for what the RK4 method (the order-4 Runge-Kutta method) gives at each timepoint, starting at 0.0 and stepping 0.1 units at a time up to 1.0. We can then use the Symbolics.jl function build_function to turn the solution at the final timepoint into a callable function and check it. For reference, I will re-solve the differential equation at a much higher accuracy. We can test this against the true solution, which for the linear ODE we know to be:

$$ y(t) = y_0 \exp(2t) $$

sol = solve(prob,RK4(),dt=1/1000,adaptive=false)
end_solution = build_function(sol[end],y0,expression=Val{false})
 
println(end_solution(2)) # 14.778112197857341
println(2exp(2)) # 14.7781121978613
println(end_solution(3)) # 22.167168296786013
println(3exp(2)) # 22.16716829679195

We have successfully computed a highly accurate approximation to our solution as a function of the initial condition! We can even do this for systems of ODEs. For example, let’s get a function which approximates a solution to the Lotka-Volterra equations:

using Symbolics, DifferentialEquations
@variables x0 y0
u0 = [x0;y0]
f = function (dy,y,p,t)
  dy[1] = 1.5y[1] - y[1]*y[2]
  dy[2] = -3y[2] + y[1]*y[2]
end
prob = ODEProblem(f,u0,(0.0,1.0))
sol = solve(prob,RK4(),dt=1/2,adaptive=false)

The result is a stupidly large expression, because it grows exponentially with the number of steps.
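Why does it blow up? Each step substitutes the previous (nonlinear) state expression back into the right-hand side, so the expression roughly doubles in size per step. A toy base-Julia sketch of the mechanism, iterating a single quadratic map on `Expr` objects (this is not the actual RK4 update, just the substitution pattern):

```julia
# Each "step" splices the previous expression into a quadratic, so the
# printed size roughly doubles every iteration.
expr = :x
sizes = Int[]
for i in 1:4
    global expr = :($expr * (1 - $expr))  # toy quadratic step
    push!(sizes, length(string(expr)))    # track how big the expression gets
end
sizes  # strictly increasing, roughly doubling each step
```

The Lotka-Volterra right-hand side is quadratic in the state, so the symbolic RK4 iterates suffer exactly this kind of growth.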

Going Past The Edge: What Happens With Incompatibility?

There are some caveats here. For example, if you want to use Tsit5 (the more efficient successor to ode45/dopri5), you’ll get an error:

julia> sol = solve(prob,Tsit5())
ERROR: TypeError: non-boolean (Num) used in boolean context
Stacktrace:
  [1] ode_determine_initdt(u0::Vector{Num}, t::Float64, tdir::Float64, dtmax::Float64, abstol::Num, reltol::Num, internalnorm::typeof(DiffEqBase.ODE_DEFAULT_NORM), prob::ODEProblem{Vector{Num}, Tuple{Float64, Float64}, true, SciMLBase.NullParameters, ODEFunction{true, var"#11#12", LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, integrator::OrdinaryDiffEq.ODEIntegrator{Tsit5, true, Vector{Num}, Nothing, Float64, SciMLBase.NullParameters, Float64, Num, Float64, Vector{Vector{Num}}, ODESolution{Num, 2, Vector{Vector{Num}}, Nothing, Nothing, Vector{Float64}, Vector{Vector{Vector{Num}}}, ODEProblem{Vector{Num}, Tuple{Float64, Float64}, true, SciMLBase.NullParameters, ODEFunction{true, var"#11#12", LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, Tsit5, OrdinaryDiffEq.InterpolationData{ODEFunction{true, var"#11#12", LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing}, Vector{Vector{Num}}, Vector{Float64}, Vector{Vector{Vector{Num}}}, OrdinaryDiffEq.Tsit5Cache{Vector{Num}, Vector{Num}, Vector{Num}, OrdinaryDiffEq.Tsit5ConstantCache{Num, Float64}}}, DiffEqBase.DEStats}, ODEFunction{true, var"#11#12", LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing}, OrdinaryDiffEq.Tsit5Cache{Vector{Num}, Vector{Num}, Vector{Num}, 
OrdinaryDiffEq.Tsit5ConstantCache{Num, Float64}}, OrdinaryDiffEq.DEOptions{Num, Num, Num, Float64, typeof(DiffEqBase.ODE_DEFAULT_NORM), typeof(LinearAlgebra.opnorm), Nothing, CallbackSet{Tuple{}, Tuple{}}, typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN), typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE), typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK), DataStructures.BinaryMinHeap{Float64}, DataStructures.BinaryMinHeap{Float64}, Nothing, Nothing, Int64, Tuple{}, Tuple{}, Tuple{}}, Vector{Num}, Num, Nothing, OrdinaryDiffEq.DefaultInit})
    @ OrdinaryDiffEq ~\.julia\dev\OrdinaryDiffEq\src\initdt.jl:82
  [2] ...

What this is saying is that there is no way to evaluate the boolean expression in the adaptive code that chooses whether to accept or reject a time step. This check exists because the adaptive algorithm uses normed errors to decide how to change the stepsize. However, in order to compute a normed error, we’d actually have to know the initial condition… so this just isn’t going to work.
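The failure mode can be reproduced in miniature with toy types (the `Sym` and `Relation` structs below are hypothetical stand-ins for a symbolic number, written just for illustration): a comparison on a symbolic value returns a symbolic relation rather than a Bool, so branching on it throws the same kind of TypeError as in the stack trace.

```julia
# Toy "symbolic" types: a comparison builds a relation instead of deciding it.
struct Sym
    name::Symbol
end
struct Relation
    lhs::Sym
    op::Symbol
    rhs::Any
end
Base.:<=(a::Sym, b) = Relation(a, :<=, b)  # the comparison stays symbolic

err_norm = Sym(:err_norm)
cond = err_norm <= 1     # a Relation, not true/false
try
    if cond              # an `if` needs a concrete Bool...
        println("accept step")
    end
catch ex
    println(typeof(ex))  # TypeError: non-boolean used in boolean context
end
```

The adaptive step-acceptance test is exactly such an `if` on a normed error, which is why `Num` states trip it while plain floats never do.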

Conclusion

This is kind of a silly example just for fun, but there’s a more general point here. In Julia, a new algorithm can be as simple as passing a new type to pre-written functionality. This vastly decreases the amount of work you actually have to do, without any loss of efficiency! You can add this to your Julia bag of tricks.

Edit: March 11, 2021

“This is kind of a silly example just for fun”, haha well by now this has become the core insight behind the ModelingToolkit.jl project. To see what this evolved into, check out Generalizing Automatic Differentiation to Automatic Sparsity, Uncertainty, Stability, and Parallelism. This blog post has now been updated to use the Symbolics.jl CAS.

4 thoughts on “Some Fun With Julia Types: Symbolic Expressions in the ODE Solver”

  1. March 2021, Something In DifferentialEquations.jl Has Changed?

    Some Fun With Julia Types: Symbolic Expressions in the ODE Solver

    using DifferentialEquations, SymEngine
    y0 = symbols(:y0)
    u0 = y0
    f = (y,p,t) -> 2y
    prob = ODEProblem(f,u0,(0.0,1.0))
    sol = solve(prob,RK4(),dt=1/10,adaptive=false)

    The first example now gives an error at this line:

    sol=solve(prob,RK4(), dt=1/10, adaptive=false)
    ERROR: ArgumentError: Object can have no free symbols

    I can’t print or plot the solution and get a gigantic stack trace back from Julia. Which free variable is it complaining about here?

    The 2nd example works perfectly.


  2. This ability of passing types to determine the behavior of a generic algorithm is one of the greatest features of modern programming languages, and the fact that the Julia community is doing it right really matters. Other languages have generics too, but the issue is that the packages are disconnected and reinvent the wheel (e.g. concepts like Arrays, Matrices, etc. change among alternative libraries in C++).

