A Collection of Jacobian Sparsity Acceleration Tools for Julia


Over the summer, a whole suite of Jacobian sparsity acceleration tools for Julia has been developed. These are encoded in the following packages:

The toolchain is showcased in the following blog post by Pankaj Mishra, the student who built a lot of the Jacobian coloring and decompression framework. Langwen Huang set up the fast paths for structured matrices (tridiagonal, banded, and block-banded matrices) and also integrated these tools with DifferentialEquations.jl. Shashi Gowda then set up a mechanism for automatically detecting the sparsity of Julia programs (!!!).

A tutorial on using these tools together is given in the SparseDiffTools.jl README. In summary, the workflow is as follows (a code sketch is given after the list):

  1. Find your sparsity pattern, Jacobian structure (i.e. Jacobian type), or automatically detect it with SparsityDetection.jl.
  2. Call `matrix_colors(A)` from SparseDiffTools.jl to get the `colorvec` for A. This is the vector that the … READ MORE
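To make the flow concrete, here is a minimal sketch following the SparseDiffTools.jl README; the function `f!` below is a toy example I made up, and exact names may differ slightly across package versions:

```julia
using SparsityDetection, SparseDiffTools, SparseArrays

# A toy in-place function with a sparse Jacobian.
function f!(du, u)
    du[1] = u[1] - u[2]
    du[2] = u[2]^2 - u[3]
    du[3] = u[3] + u[1]
end

input = rand(3); output = similar(input)

# Step 1: automatically detect the sparsity pattern.
pattern = jacobian_sparsity(f!, output, input)
jac = Float64.(sparse(pattern))

# Step 2: compute the coloring vector for the pattern.
colors = matrix_colors(jac)

# Fill the Jacobian with compressed forward-mode AD: this takes only
# maximum(colors) differentiation sweeps instead of length(input).
forwarddiff_color_jacobian!(jac, f!, input, colorvec = colors)
```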

Developing Julia Packages


October 4 2019 in Julia | Tags: | Author: Christopher Rackauckas

Have you ever wanted to develop your own package for the Julia programming language? Have you ever wanted to contribute a bug fix? Then this tutorial is for you! I will walk you through joining the community resources (Discourse and Slack) so that you can get help, getting the Juno and GitKraken development environments going, and all of the steps of building a package. In this video you will learn how to use modules, how to interactively update a package without recompiling, how to set up continuous integration testing, and how to get your package registered. In addition, I show how to “dev” a package to get a local copy to work on, and use this to give a bug fix to open a pull request to fix an issue on … READ MORE
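For reference, a minimal sketch of the Pkg commands involved (the package names here are placeholders; Revise.jl is the tool that makes updating without recompiling work):

```julia
using Pkg

Pkg.generate("MyPackage")          # scaffold a new package: Project.toml + src/
Pkg.develop(path = "./MyPackage")  # track the local copy so your edits are used
Pkg.develop("Example")             # or "dev" a registered package to fix a bug

# In a fresh session, load Revise.jl before the dev'd package so that edits
# are picked up interactively without restarting Julia:
# using Revise, MyPackage
```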

When do micro-optimizations matter in scientific computing?


September 3 2019 in Julia, Programming, Scientific ML | Tags: | Author: Christopher Rackauckas

Something that has been bothering me about discussions of microbenchmarks is that people seem to ignore that the benchmarks are highly application-dependent. The easiest way to judge whether a benchmark really matters to a particular application is to look at the operation overhead of the largest and most common calls. If a single operation consumes 99.9% of your runtime, making everything else 100x faster still won’t do anything to your real runtime performance. But at the same time, if some small operation sits in a tight loop, then that small operation may well be your bottleneck. Keep the classic chart of operation costs in the back of your mind when considering optimizations.
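As a quick illustration of this point, here is a sketch using BenchmarkTools.jl (the sizes are arbitrary and the timings are only illustrative):

```julia
using BenchmarkTools, LinearAlgebra

A = rand(1000, 1000); B = rand(1000, 1000)
x = rand(1000)

# The large call dominates the runtime: tens of milliseconds.
@btime $A * $B;

# The small call is noise next to it: sub-microsecond. Making it
# 100x faster would not measurably change the total runtime...
@btime dot($x, $x);

# ...unless it sits inside a tight loop that runs millions of times.
```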

Here is a very brief overview on what to think about when optimizing code and how to figure out when to stop.

Function Call Overhead

When dealing with … READ MORE

The Essential Tools of Scientific Machine Learning (Scientific ML)


Scientific machine learning is a burgeoning discipline which blends scientific computing and machine learning. Traditionally, scientific computing focuses on large-scale mechanistic models, usually differential equations, that are derived from scientific laws which simplify and explain phenomena. On the other hand, machine learning focuses on developing non-mechanistic, data-driven models which require minimal prior knowledge and assumptions. The two sides have their pros and cons: differential equation models are great at extrapolating, their terms are explainable, and they can be fit with small data and few parameters. Machine learning models, on the other hand, require “big data” and lots of parameters, but are not biased by the scientist’s ability to correctly identify valid laws and assumptions.

However, the recent trend has been to merge the two disciplines, allowing explainable models that are data-driven, require less data than traditional machine learning, and utilize the … READ MORE

Scientific AI: Domain Models with Integrated Machine Learning


July 25 2019 in Uncategorized | Tags: | Author: Christopher Rackauckas

Modeling practice seems to be partitioned into scientific models defined by mechanistic differential equations and machine learning models defined by parameterizations of neural networks. While the ability of interpretable mechanistic models to extrapolate from little information is seemingly at odds with the big data “model-free” approach of neural networks, the next step in scientific progress is to utilize these methodologies together in order to emphasize their strengths while mitigating weaknesses. In this talk we will describe four separate ways that we are merging differential equations and deep learning through the power of the DifferentialEquations.jl and Flux.jl libraries. Data-driven hypothesis generation of model structure, automated real-time control of dynamical systems, accelerated PDE solving, and memory-efficient deep learning workflows will all be shown to derive from this common computational structure … READ MORE

Neural Jump SDEs (Jump Diffusions) and Neural PDEs


This is just an exploration of some new neural models I decided to jot down for safekeeping. DiffEqFlux.jl gives you the differentiable programming tools to mix any DifferentialEquations.jl problem type (DEProblem) with neural networks. We demonstrated this before, not just with neural ordinary differential equations, but also with things like neural stochastic differential equations and neural delay differential equations.
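For flavor, here is a minimal neural ODE sketch in the spirit of the DiffEqFlux.jl docs; the constructor and calling convention have changed across DiffEqFlux versions, so treat this as a sketch rather than the exact API:

```julia
using DiffEqFlux, OrdinaryDiffEq, Flux

# A small network defines the right-hand side du/dt = NN(u).
dudt = Chain(Dense(2, 16, tanh), Dense(16, 2))
tspan = (0.0f0, 1.0f0)

# Wrap it as a NeuralODE layer solved with Tsit5.
node = NeuralODE(dudt, tspan, Tsit5(), saveat = 0.1f0)

u0 = Float32[1.0, 0.0]
sol = node(u0)   # forward pass; gradients flow through the solver
```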

At the time we made DiffEqFlux, we were the “first to the gate” for many of these differential equation types and left it as an open question for people to find a use for these tools. And judging by the arXiv papers that went out days after NeurIPS submissions were due, it looks like people now have justified some machine learning use cases for them. There were two separate papers on neural … READ MORE

Zero-Cost Abstractions in Julia: Indexing Vectors by Name with LabelledArrays


November 12 2018 in Uncategorized | Tags: | Author: Christopher Rackauckas

The ability to build robust array type abstractions without overhead is one of the most fantastic features of Julia. It can be surprising how far the simple idea of an array can be pushed to end up with better code. In this blog post I want to show how you can utilize Julia’s interface tools and @generated functions to build these kinds of zero-cost abstractions.
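As a taste of where this goes, here is a minimal sketch of name-based indexing that compiles away via an @generated function; `NamedVec` is a hypothetical type for illustration, not the LabelledArrays.jl API:

```julia
# Hypothetical NamedVec: the names live in the type parameter Syms,
# so they are available at compile time.
struct NamedVec{Syms,T,N} <: AbstractVector{T}
    data::NTuple{N,T}
end
NamedVec{Syms}(data::NTuple{N,T}) where {Syms,T,N} = NamedVec{Syms,T,N}(data)

Base.size(::NamedVec{Syms,T,N}) where {Syms,T,N} = (N,)
Base.getindex(v::NamedVec, i::Int) = getfield(v, :data)[i]

# The name lookup runs once, at compile time: the generated body is a
# plain tuple index, so v.x costs the same as accessing v.data[1].
@generated function _lookup(v::NamedVec{Syms}, ::Val{s}) where {Syms,s}
    i = findfirst(isequal(s), Syms)
    i === nothing && return :(error("no component named ", $(QuoteNode(s))))
    :(getfield(v, :data)[$i])
end
Base.getproperty(v::NamedVec, s::Symbol) =
    s === :data ? getfield(v, :data) : _lookup(v, Val(s))

v = NamedVec{(:x, :y)}((1.0, 2.0))
v.x  # 1.0, with no runtime name lookup after constant propagation
```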

What is a zero-cost abstraction?

First let’s define a zero-cost abstraction. A zero-cost abstraction is a coding tool that has zero run-time cost associated with using it. Essentially, it transforms into the fastest possible code during compilation so that its use has no effect on the run time of your code.

Example: Diagonal Matrices

A quick example comes from simple type wrappers in Julia. An example of this from the standard library is the Diagonal … READ MORE
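To sketch the Diagonal idea (this assumes standard LinearAlgebra behavior; the timings are only illustrative):

```julia
using LinearAlgebra, BenchmarkTools

n = 1000
D = Diagonal(rand(n))   # wrapper type: stores only the n diagonal entries
A = Matrix(D)           # the same matrix, stored densely
x = rand(n)

@btime $D * $x;   # O(n): dispatch selects the specialized diagonal method
@btime $A * $x;   # O(n^2): generic dense matrix-vector multiply
```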

The Nonlinear Effect of Code Speed on Productivity


October 29 2018 in Uncategorized | Tags: | Author: Christopher Rackauckas

I obsess over code performance, probably a little bit too much. But there’s a big reason for wanting code to run in less than an hour instead of slightly more than two hours. The reason is that a code’s speed interacts with how we go about our day, and this speed can have a nonlinear effect on our productivity.

Here’s a concrete example. When I was a PhD student, I had a project which was solving ensembles of stochastic partial differential equations which were a model of the zebrafish hindbrain. Each run of this ensemble took about two hours and ten minutes to perform. For a good chunk of time, I needed to keep running these ensembles in order to manually check the output to tweak the model until it started to look realistic. Tweaking the model took only a … READ MORE

Some State of the Art Packages in Julia v1.0


August 27 2018 in Julia, Programming | Tags: | Author: Christopher Rackauckas

In this post I would like to explain some of the state-of-the-art packages in Julia and how they are pushing their disciplines forward. The part I would really like to make clear is the ways in which Julia gives these packages a competitive advantage and how these features are being utilized to reach their end goals. Maybe this will generate some more ideas!

What are the advantages for Julia in terms of package development?

Using Python or R to call other people’s packages is perfectly fine: if all of your time is spent in package code calls, then your code will be just as fast as the package code (I purposely leave MATLAB off this list because its interoperability is much more difficult to use). There is one big exception where using an efficient package can still be slow even when … READ MORE

PuMaS.jl: Pharmaceutical Modeling and Simulation Engine


August 13 2018 in Biology, Julia, Programming | Tags: | Author: Christopher Rackauckas

Here is an introduction to a pharmaceutical modeling project which I will be releasing in the near future. More details to come.