In Julia, you can naturally write generic algorithms that work on any type supporting a specific set of "actions". For example, an AbstractArray is any type that implements a specific set of functions. This means that any generically-written algorithm that wants an array can be handed an AbstractArray and it will "just work". This kind of abstraction makes it easy to write a simple algorithm and then reuse that exact code for other purposes. For example, you can get distributed computing by just passing in a DistributedArray, and you can run the algorithm on the GPU by passing in a GPUArray. Because Julia's functions auto-specialize on the types you give them, Julia automatically compiles efficient versions specifically for the types you pass in, which strips away the cost of the abstraction at compile time.
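As a minimal sketch of what "generic + specialized" means in practice (the function name here is made up for illustration, not from any library):

```julia
# A generic algorithm: works on anything iterable whose elements support +.
# Julia compiles a separate, specialized method body for each element type.
function total(xs)
    s = zero(eltype(xs))
    for x in xs
        s += x
    end
    return s
end

total([1, 2, 3])    # specializes on Vector{Int}          -> 6
total(1:100)        # specializes on UnitRange{Int}, no array ever allocated -> 5050
total([1.5, 2.5])   # a separate Float64 specialization   -> 4.0
```

The same source code produces three different compiled specializations, each as fast as if it had been written for that type by hand.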

This means that one way to come up with new efficient algorithms in Julia is simply to pass a new type to an existing algorithm. Does this kind of thing really work? Well, I wanted to show that you can push this to absurd places, so here is one of my latest experiments.

### Note

The ODE solver part of this post won't work on the release version until some tags go through (I'd say at least a week?). The fix I had to make was to the "internal instability checks": these normally default to checking whether a number is NaN, but for numbers which aren't AbstractFloats they now always return false (meaning: never halt the integration due to instability). Do you see why the NaN check would be a problem when using a symbolic expression? That's a good question to keep in mind while reading.
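A sketch of the idea behind that fix (the dispatch shown here is illustrative, not the actual DifferentialEquations.jl internals): the NaN check only makes sense for floating-point numbers, so for any other number type the check should simply report "stable".

```julia
# Illustrative sketch: NaN-based divergence detection only makes sense for
# floats; other number types (like symbolic expressions) either have no
# isnan method at all, or give a meaningless answer.
is_unstable(u::AbstractFloat) = isnan(u)
is_unstable(u) = false   # non-float numbers: never halt the integration

is_unstable(NaN)   # true: the floating-point integration diverged
is_unstable(1.0)   # false
is_unstable(1//2)  # false: a Rational stands in for a symbolic type here
```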

## SymEngine.jl

Let me first give a little bit of an introduction to SymEngine.jl. SymEngine is a rewrite of SymPy's core into C++ for increased efficiency, and SymEngine.jl is a Julia wrapper for it. The only part of the package I will use is its symbolic expression type, SymEngine.Basic.

In many cases, this is all you need from the package, because dispatch and generic algorithms tend to handle everything else. For example, what if you want a symbolic expression for the inverse of a matrix? Just make a matrix of expressions and call Julia's inverse function:

```julia
y1, y2, y3, y4 = @vars y1 y2 y3 y4
A = [y1    y1+y2
     y3+y2 y4]

println(inv(A))
# SymEngine.Basic[(1 + y2*y3/(y1*(y4 - y2*y3/y1)))/y1 -y2/(y1*(y4 - y2*y3/y1)); -y3/(y1*(y4 - y2*y3/y1)) (y4 - y2*y3/y1)^(-1)]
```

Notice that the only thing used directly from SymEngine here is the declaration of the symbolic variables (the first line). The second line just creates a Julia matrix of these expressions, and the last line prints the result of applying the inv function from Julia's Base library to this matrix of expressions.

Julia's inverse function is generically written: it only requires that the elements satisfy certain properties of numbers, for example that they can be added, subtracted, multiplied, and divided. Since our "number" type can do these operations, the inverse function "just works". And by Julia's magic, a separate method is compiled just for matrices of this type, making it highly efficient. Easy and efficient: the true promise of Julia is satisfied.
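You can see the same genericity without any symbolic package at all by using Rationals, where the generic inverse gives an exact answer instead of a floating-point approximation:

```julia
using LinearAlgebra

# Rational elements support +, -, *, and / exactly, so the same generic
# inverse routine returns an exact rational result.
A = [2//1 1//1
     1//1 1//1]

inv(A)   # exact: [1//1 -1//1; -1//1 2//1]
```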

## Going Deep: The ODE Solver

Now let's push this. Say we have a problem where we want to find out which initial condition is required in order to reach a specific value. One way to calculate this is to use Boundary Value Problem (BVP) solvers, but say we want to do this at a bunch of different timepoints.
How about using a numerical ODE solver, except instead of numbers, we use symbolic expressions for the initial conditions, so that we can calculate approximate functions of the initial condition at the timepoints? Sounds fun and potentially useful, so let's give it a try.
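To see why this can work at all, here's a toy version of the idea with no symbolic library: for a linear ODE the state is always a multiple of y0, so we can track just that coefficient with a tiny custom number type and push it through a hand-rolled Euler loop. Everything here is a made-up sketch, not part of any package:

```julia
# Toy "symbolic" scalar representing c*y0; we only track the coefficient c.
struct LinSym
    c::Float64
end
Base.:+(a::LinSym, b::LinSym) = LinSym(a.c + b.c)
Base.:*(k::Number, a::LinSym) = LinSym(k * a.c)

# Forward Euler on dy/dt = 2y, starting from the symbolic condition 1*y0
function euler(nsteps)
    y  = LinSym(1.0)          # represents y0 itself
    dt = 1 / nsteps
    for _ in 1:nsteps
        y = y + dt * (2 * y)  # y_{n+1} = y_n + dt * f(t_n, y_n)
    end
    return y
end

euler(1000).c   # ≈ 7.374, approaching exp(2) ≈ 7.389 since y(1) = y0*exp(2)
```

The numeric Euler code never knew it was operating on a "symbolic" quantity; arithmetic on the custom type was all it needed. That's exactly the trick we're about to play on a real solver.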

The ODE solvers for Julia are in the package DifferentialEquations.jl. Let's solve the linear ODE:

$\frac{dy}{dt} = 2y$

with an initial condition which is a symbolic variable. Following the tutorial, let's swap out the numbers for symbolic expressions. To do this, we simply make the problem type and solve it:

```julia
using DifferentialEquations, SymEngine
y0 = symbols(:y0)
u0 = y0
f = (t,y) -> 2y
prob = ODEProblem(f,u0,(0.0,1.0))
sol = solve(prob,RK4(),dt=1/10)

println(sol)
# SymEngine.Basic[y0,1.2214*y0,1.49181796*y0,1.822106456344*y0,2.22552082577856*y0,2.71825113660594*y0,3.32007193825049*y0,4.05513586537915*y0,4.95294294597409*y0,6.04952451421275*y0,7.38888924165946*y0,7.38888924165946*y0]
```
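As a sanity check on those coefficients: applying RK4 to the linear test equation $y' = \lambda y$ multiplies the state by the stability polynomial $R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24$ with $z = \lambda \, dt$, so each step above should scale y0 by R(0.2):

```julia
# RK4 stability polynomial for the linear test problem y' = λ*y
R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24

R(0.2)      # ≈ 1.2214, the per-step factor seen in the printed solution
R(0.2)^2    # ≈ 1.49181796, the coefficient at the second step
R(0.2)^10   # ≈ 7.38888924165946, the coefficient at t = 1.0
```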

The solution is an array of symbolic expressions for what the RK4 method (the order-4 Runge-Kutta method) gives at each timepoint, starting at 0.0 and stepping 0.1 units at a time up to 1.0. We can then use the SymEngine function lambdify to turn the solution at the end into a function and check it. For reference, I will re-solve the differential equation at a much higher accuracy. We can test this against the true solution, which for this linear ODE we know to be:

$y(t) = y_0 \exp(2t)$

```julia
sol = solve(prob,RK4(),dt=1/1000)
end_solution = lambdify(sol[end])

println(end_solution(2)) # 14.778112197857341
println(2exp(2))         # 14.7781121978613
println(end_solution(3)) # 22.167168296786013
println(3exp(2))         # 22.16716829679195
```

We have successfully computed a very high accuracy approximation to our solution as a function of the initial condition! We can even do this for systems of ODEs. For example, let's get a function which approximates a solution to the Lotka-Volterra equations:

```julia
using SymEngine, DifferentialEquations
# Make our initial conditions be symbolic expressions from SymEngine
x0, y0 = @vars x0 y0
u0 = [x0; y0]
f = function (t,y,dy)
    dy[1] = 1.5y[1] - y[1]*y[2]
    dy[2] = -3y[2] + y[1]*y[2]
end
prob = ODEProblem(f,u0,(0.0,1.0))
sol = solve(prob,RK4(),dt=1/2)
```

The result is a stupidly large expression, because the expressions grow exponentially in size with each step and SymEngine doesn't have a simplification function yet (it's still pretty new!), but hey, this is super cool anyway.
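The growth is easy to see even without SymEngine: each step of an explicit method splices the previous expression into the right-hand side several times, so without simplification the printed form of the expression grows geometrically. A crude textual stand-in (nothing here is real solver code, just string surgery mimicking the substitution):

```julia
# Each "step" splices four copies of the previous expression into the next,
# mimicking how an unsimplified nonlinear step grows.
step(expr) = "($expr + dt*(1.5*($expr) - ($expr)*($expr)))"

function expr_lengths(n)
    e = "y0"
    lens = Int[]
    for _ in 1:n
        e = step(e)
        push!(lens, length(e))
    end
    return lens
end

expr_lengths(5)   # each length is 4× the previous plus a constant
```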

## Going Past The Edge: What Happens With Incompatibility?

There are some caveats here. You do need to work through MethodErrors. For example, if you want to use the more efficient adaptive counterpart of ode45/dopri5 (Tsit5), you'll get an error:

```julia
sol = solve(prob,Tsit5())
```

```
MethodError: no method matching abs2(::SymEngine.Basic)
Closest candidates are:
  abs2(!Matched::Bool) at bool.jl:38
  abs2(!Matched::ForwardDiff.Dual{N,T<:Real}) at C:\Users\Chris\.julia\v0.5\ForwardDiff\src\dual.jl:325
  abs2(!Matched::Real) at number.jl:41
  ...
```

What this is saying is that there is no abs2 function for symbolic expressions. abs2 gets called because the adaptive algorithm computes normed errors in order to choose the stepsize. But to evaluate a normed error on a symbolic solution, we'd actually have to know the initial condition... so this just isn't going to work.
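The same failure is easy to reproduce with a stand-in type (Sym here is a made-up struct, not SymEngine.Basic, and the norm formula is a generic sketch of what adaptive steppers compute, not the solver's exact internals):

```julia
struct Sym        # stand-in for a symbolic expression type
    name::String
end

# A typical normed error estimate used to drive stepsize adaptation
err_norm(errs) = sqrt(sum(abs2, errs) / length(errs))

err_norm([3.0, 4.0])   # fine for floats: sqrt((9 + 16)/2) = sqrt(12.5)

# For symbolic entries there is no abs2 method, so the call throws:
try
    err_norm([Sym("y0")])
catch e
    println(e isa MethodError)   # true
end
```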

## Conclusion

This is kind of a silly example just for fun, but there's a more general point here: in Julia, new algorithms can come from just passing new types to pre-written functionality. This vastly decreases the amount of work you actually have to do, without a loss of efficiency! You can add this to your Julia bag of tricks.
