Multi-node Parallelism in Julia on an HPC (XSEDE Comet)
February 23 2016 in HPC, Julia, Programming | Tags: comet, HPC, julia, multi-node, XSEDE | Author: Christopher Rackauckas
Today I am going to show you how to parallelize your Julia code over some standard HPC interfaces. First I will go through the steps of parallelizing a simple code, and then show how to run it with single-node and multi-node parallelism. The compute resources I will be using are the XSEDE (SDSC) Comet computer (using Slurm) and UC Irvine’s HPC (using SGE), to show how to run the code in two of the main job schedulers. You can follow along with the Comet portion by applying for and receiving a trial allocation. READ MORE
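To give a flavor of what the post covers: a minimal sketch of requesting Slurm-managed workers from within Julia and running a parallel map across them. This assumes the ClusterManagers.jl package; the partition name, time limit, and worker count are placeholders for whatever your allocation provides, and the post itself walks through the scheduler-specific details. (In the Julia of 2016 the parallel primitives lived in Base; today they come from the Distributed standard library.)

using Distributed, ClusterManagers

# Ask Slurm for 8 workers; keyword arguments are passed through to srun.
addprocs(SlurmManager(8), partition="compute", t="00:10:00")

# Define the work on every worker, then farm 32 tasks out across them.
@everywhere work(i) = sum(rand(10^6))
results = pmap(work, 1:32)
println(sum(results))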
Comparison of US HPC Resources
February 20 2016 in HPC | Tags: HPC, XSEDE | Author: Christopher Rackauckas
It can sometimes be quite daunting to get the information you need: when looking for the right HPC to run your code on, there are a lot of computers in the US to choose from. I decided to start compiling the information into some tables in order to make the options easier to understand. I am starting with a small subset that includes Blue Waters and some of XSEDE, with rows for the parts that interest me, but if you would like me to add to the list, please let me know. READ MORE
Using Julia’s C Interface to Utilize C Libraries
February 4 2016 in C, Julia, Programming | Tags: | Author: Christopher Rackauckas
I recently ran into the problem that Julia’s GSL.jl library (a wrapper over the GNU Scientific Library) is too new, i.e. some portions don’t work as of early 2016. However, to solve my problem I needed access to adaptive Monte Carlo integration methods. This meant it was time to go in depth into Julia’s C interface. I will go step by step through how I created and compiled the C code and called it from Julia. READ MORE
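For context, the entry point for all of this is Julia’s built-in ccall. A minimal sketch, calling cos from the system math library (this assumes a Unix-style libm is available; the post itself covers compiling and loading your own shared library):

# (function name, library) tuple, return type, argument types, then the arguments.
c_result = ccall((:cos, "libm"), Cdouble, (Cdouble,), 1.0)
println(c_result ≈ cos(1.0))   # true

Calling your own code works the same way once the C file is compiled to a shared library, e.g. ccall((:my_integrate, "libmycode"), ...), where my_integrate and libmycode are hypothetical names standing in for your own.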
Setting Variables Element-wise in a Matrix in Mathematica
February 2 2016 in Mathematica, Programming | Tags: | Author: Christopher Rackauckas
This is the latest entry on tips, tricks, and hacks in Mathematica. In my latest problem I need to solve for the components of many matrices using optimization algorithms. Mathematica does not allow one to pass matrices directly into these algorithms, so one has to perform the algorithms on variables that stand in for the components. One option would be to set each component as its own variable:
A011 = 2; A021 = 1.4; ...
But that would be inelegant and hard to maintain. Luckily, I found a way to specify them as matrices. I attach a screenshot to show it in all … READ MORE
Simple Parallel Optimization in Mathematica
February 2 2016 in Mathematica | Tags: optimization, parallel | Author: Christopher Rackauckas
A quick search on Google did not turn up a standard method for parallelizing NMaximize and NMinimize, so I wanted to share how I did it.
My implementation is a simple use of Map-Reduce-style parallelism. What this means is that we map the same problem out to N processes, and when they finish they each give one result, and we apply a reduction function to reduce the N results to 1. So what we will do is map the NMaximize call to each of N cores, where each will solve the problem from its own random seed. From there, each process returns what it found as the maximum, and we take the maximum of these maximums as our best estimate of the global maximum (the NMinimize case works the same way).
Notice that this is not optimal in all cases: for … READ MORE
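The post does this with Mathematica’s parallel tools around NMaximize; as a rough Julia analogue of the same map-reduce-over-random-starts idea (assuming the Optim.jl package and a throwaway Rosenbrock objective, both stand-ins, and minimizing rather than maximizing):

using Distributed
addprocs(4)
@everywhere using Optim
@everywhere rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Map: each worker runs a local solve from its own random starting point.
starts  = [4 .* rand(2) .- 2 for _ in 1:8]
results = pmap(x0 -> optimize(rosenbrock, x0, NelderMead()), starts)

# Reduce: keep the best of the local results as the global estimate.
best = results[argmin(Optim.minimum.(results))]
println(Optim.minimizer(best), "  ", Optim.minimum(best))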
Why is Mathematica Stopping My Long Calculation?
January 30 2016 in Mathematica, Programming | Tags: | Author: Christopher Rackauckas
You set it all up, you know it’s going to take a few days, and you run it. You check it a few hours later and it’s all good! Then the next morning… why are no cells still computing?
Mathematica’s documentation isn’t much help for what just happened. If you were using a function like FullSimplify, there are time limits on it. However, it will give you an error/warning if it hits the time limit, so if you ended up with a blank screen and no calculations running, that’s not it.
This happened to me. I found out that the hidden culprit (for me) was Mathematica’s history tracking. The fix is simple, add the following code to the top of your file:
$HistoryLength = 10;
What’s happening is that Mathematica saves its entire command history by … READ MORE
Julia iFEM3: Solving the Poisson Equation via MATLAB Interfacing
January 24 2016 in FEM, Julia, MATLAB, Programming | Tags: FEM, julia, Poisson Equation | Author: Christopher Rackauckas
This is the third part in the series for building a finite element method solver in Julia. Last time we used our mesh generation tools to assemble the stiffness matrix. The details for what will be outlined here can be found in this document. I do not want to dwell too much on the actual code details since they are quite nicely spelled out there, so instead I will focus on the porting of the code. The full code is at the bottom for reference.
The Ups, Downs, and Remedies to Math in Julia
At this point I have been coding in Julia for over a week and have been loving it. I come into each new function knowing that if I just change the array dereferencing from () to [] and wrap vec() calls around vectors being used as … READ MORE
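A quick illustration of the two syntax changes mentioned above, for anyone following along in Julia:

A = [1 2; 3 4]
A[1,2]        # Julia indexes with [] where MATLAB uses ()
b = [1 2 3]   # a 1x3 row matrix
vec(b)        # flatten to a true 1-D vector before passing it somewhere a vector is expected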
Julia iFEM 2: Optimizing Stiffness Matrix Assembly
January 23 2016 in FEM, Julia, MATLAB, Programming | Tags: julia, optimization, sparse, vectorization | Author: Christopher Rackauckas
This is the second post looking at building a finite element method solver in Julia. The first post was about mesh generation and language bindings. In this post we are going to focus on performance. We start with the command from the previous post:
node,elem = squaremesh([0 1 0 1],.01)
which generates an array elem where each row holds the reference indices of the 3 points which form a triangle (element). The actual locations of these points are in the array node, and so node(1) gives the location in the (x,y)-plane of the first point.
The approach to building the stiffness matrix for the … READ MORE
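For context on the sparse/vectorization angle: in Julia, sparse() sums values that share the same (row, column) index, which is the standard trick behind vectorized assembly — collect every element’s local contributions as triplets and hand them over in one call. A small sketch with made-up values (using SparseArrays is needed in modern Julia; in the 2016-era Julia of the post, sparse() lived in Base):

using SparseArrays
Is = [1, 2, 2, 3, 1]             # row indices of local contributions
Js = [1, 2, 2, 3, 1]             # column indices
Vs = [0.5, 1.0, 1.0, 2.0, 0.5]   # the contributions themselves
A  = sparse(Is, Js, Vs, 3, 3)    # duplicate (i,j) pairs are summed into one entry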
Optimizing .*: Details of Vectorization and Metaprogramming
January 21 2016 in Julia, MATLAB | Tags: BLAS, de-vectorization, high performance computing, Linpack, MKL, VML | Author: Christopher Rackauckas
Many of us mathematicians were taught to use MATLAB, and we were taught to vectorize everything. So obviously, if we have matrices A, B, and C, we just write
A.*B.*C
No questions asked, right? Actually, this code isn’t as optimized as you’d think. Let’s dig deeper.
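For concreteness: in MATLAB, and in the Julia of early 2016, an expression like A.*B.*C first allocates a temporary array for A.*B and then a second array for the final result, so you pay for an extra allocation and an extra pass over memory. A hand-written loop does the whole thing in one pass (modern Julia’s dot-broadcast fusion now does this for you automatically):

A, B, C = rand(1000, 1000), rand(1000, 1000), rand(1000, 1000)
out = similar(A)
for i in eachindex(A)
    out[i] = A[i] * B[i] * C[i]   # one pass, no intermediate arrays
end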
BLAS, Linpack, and MKL
The reason you are always told by “the lords of numerical math” to vectorize your code is that very smart programmers worked really hard on making these basic operations work well. Most of the “standardized” vectorized computations call subroutines from packages known as BLAS and LINPACK. To see which versions your MATLAB is using, you can call
Quick Optimizations in Julia for Performance: A Practical Example
January 19 2016 in Julia, MATLAB | Tags: AVX512, julia, performance, SIMD, threading | Author: Christopher Rackauckas
Let’s take a program which plots the standard logistic map:
r = 2.9:.00005:4; numAttract = 100;
steady = ones(length(r),1)*.25;
for i=1:300 ## Get to steady state
  steady = r.*steady.*(1-steady);
end
x = zeros(length(steady),numAttract);
x[:,1] = steady;
for i=2:numAttract ## Now grab some values at the attractor
  x[:,i] = r.*x[:,i-1].*(1-x[:,i-1]);
end
using PyPlot;
fig = figure(figsize=(20,10));
plot(collect(r),x,"b.",markersize=.06)
savefig("plot.png",dpi=300);
This plots the logistic map. If you take the same code and change the array … READ MORE