# Optimal Number of Workers for Parallel Julia

#### April 16 2016 in HPC, Julia, Programming, Stochastics | Tags: BLAS, hyperthreading, julia, parallel computing, workers | Author: Christopher Rackauckas

How many workers do you choose when running a parallel job in Julia? The answer is easy, right? The number of physical cores. We always default to that number. For my Core i7 4770K, that means 4, not 8, since 8 would count the hyperthreads. On my FX8350 there are 8 cores, but only 4 floating-point units (FPUs), which do the math, so in mathematical projects I should use 4, right? I want to demonstrate that it's not that simple.

## Where the Intuition Comes From

Most of the time when doing scientific computing you are doing parallel programming without even knowing it. This is because many vectorized operations are "implicitly parallel", meaning that they are multi-threaded behind the scenes to make everything faster. The same is true in other languages like Python, MATLAB, and R. Fire up MATLAB ... READ MORE
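This implicit threading is easy to see from Julia itself: dense linear algebra calls out to a multi-threaded BLAS, so the same matrix multiply speeds up as you give BLAS more threads. A minimal sketch (the thread count of 4 is an assumption for a 4-core machine, and the syntax is for recent Julia):

```julia
using LinearAlgebra  # provides the BLAS module

A = rand(2000, 2000)
B = rand(2000, 2000)

BLAS.set_num_threads(1)
t1 = @elapsed A * B   # matrix multiply on a single BLAS thread

BLAS.set_num_threads(4)
t4 = @elapsed A * B   # the same multiply, now on 4 BLAS threads

println("1 thread: $(t1)s, 4 threads: $(t4)s")
```

On a multi-core machine the second timing is typically much lower, even though you never wrote any explicitly parallel code.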

# Holding off on Julia for a little bit... you should blog!

#### March 28 2016 in Uncategorized | Tags: | Author: Christopher Rackauckas

I think I am going to stop posting on Julia for a little bit. I looked at JuliaBloggers.com and realized my blog is hogging it too much. Since I have gone through a pretty good arc, starting with writing FEM code and ending at multiple-GPU / Xeon Phi computing, I will focus on a few other topics for a little bit. However, I will be doing a blog post on native Xeon Phi usage via Julia's ParallelAccelerator.jl sometime soon, so be prepared for that.

In the meantime, stay tuned for topics like stochastic numerics, theoretical biology, Mathematica, and HPCs in the near future.

If you have the time, start up your own Julia blog and start contributing to JuliaBloggers!

# Benchmarks of Multidimensional Stack Implementations in Julia

#### March 20 2016 in Julia, Programming | Tags: benchmark, data structures, julia, stack | Author: Christopher Rackauckas

DataStructures.jl claims it's fast. How does it do? I wrote some quick code to check it out. I wanted to find out which implementation does best for a stack where each element is three integers. I tried filling a pre-allocated array, pushing into three separate vectors, and different implementations of the stack from the DataStructures.jl package.

```julia
function baseline()
  stack = Array{Int64,2}(1000000,3)
  for i=1:1000000,j=1:3
    stack[i,j]=i
  end
end

function baseline2()
  stack = Array{Int64,2}(1000000,3)
  for j=1:3,i=1:1000000
    stack[i,j]=i
```

... READ MORE
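For comparison, the other two approaches mentioned above can be sketched like this (the function names are mine, and this uses the current DataStructures.jl constructor syntax; the exact benchmarked versions are in the full post):

```julia
using DataStructures  # provides Stack

# Approach 2: push into three separate growable vectors
function threevectors()
  a = Vector{Int64}(); b = Vector{Int64}(); c = Vector{Int64}()
  for i = 1:1000000
    push!(a, i); push!(b, i); push!(c, i)
  end
  a, b, c
end

# Approach 3: a DataStructures.jl stack holding a tuple of three integers
function tuplestack()
  s = Stack{NTuple{3,Int64}}()
  for i = 1:1000000
    push!(s, (i, i, i))
  end
  s
end
```

The interesting question is how much the dynamic growth of `push!` costs relative to writing into a pre-allocated matrix.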

# MATLAB 2016a Release Summary for Scientific Computing

#### March 15 2016 in MATLAB, Programming | Tags: 2016a, MATLAB, optimization, parallel | Author: Christopher Rackauckas

There is a lot to read every time MATLAB releases a new version. Here is a summary of what has changed in 2016a from the eyes of someone doing HPC/Scientific Computing/Numerical Analysis. This means I will leave off a lot, and you should check it out yourself, but if you're using MATLAB for science, this may cover most of the things you care about.

- Support for sparse matrices on the GPU. A nice addition is sprand and pcg (a preconditioned conjugate gradient solver) for sparse GPU matrices.
- One other big change in the parallel computing toolbox is you can now set nonlinear solvers to estimate gradients and Jacobians in parallel. This should be a nice boost to the MATLAB optimization toolbox.
- In the statistics and machine learning toolbox, they added some algorithms for high dimensional data and now let you run kmeans ... READ MORE

# Interfacing with a Xeon Phi via Julia

#### March 4 2016 in C, HPC, Julia, Programming, Stochastics, Xeon Phi | Tags: C, julia, MIC, OpenMP, parallel, Xeon Phi | Author: Christopher Rackauckas

(Disclaimer: This is not a full-Julia solution for using the Phi, and instead is a tutorial on how to link OpenMP/C code for the Xeon Phi to Julia. There may be a future update where some of these functions are specified in Julia, and Intel's CompilerTools.jl looks like a viable solution, but for now it's not possible.)

Intel's Xeon Phi has a lot of appeal. It's an instant cluster in your computer, right? It turns out it's not quite that easy. For one, the installation process itself is quite tricky, and the device has stringent requirements for motherboard choices. Also, maxing out at over a teraflop is good, but not quite as high as NVIDIA's GPU accelerator cards.

However, there are a few big reasons why I think our interest in the Xeon Phi should be renewed. For one, Intel ... READ MORE

# Multiple-GPU Parallelism on the HPC with Julia

#### February 28 2016 in CUDA, HPC, Julia, Programming | Tags: CUDA, gpu, HPC, julia | Author: Christopher Rackauckas

This is the exciting Part 3 of using Julia on an HPC. First I got you started with using Julia on multiple nodes. Second, I showed you how to get the code running on the GPU. That gets you pretty far. However, if you got a trial allocation on Comet and started running jobs, you may have noticed when looking at the architecture that you're not getting to use the full GPU. In the job script I showed you, I asked for 2 GPUs. Why? Well, that's because the flagship NVIDIA GPU, the Tesla K80, is actually a dual GPU, and you have to control the two parts separately. You may have been following along on your own computer and have been wondering how you use the multiple GPUs in your setup as well. This tutorial will ... READ MORE

# Tutorial for Julia on the HPC with GPUs

#### February 23 2016 in CUDA, HPC, Julia, Programming | Tags: CUDA, gpu, HPC, julia | Author: Christopher Rackauckas

This is a continuation of my previous post on using Julia on the XSEDE Comet HPC. Check that out first for an explanation of the problem. In that problem, we wished to solve for the area of a region where a polynomial was less than 1, which was calculated by code like: READ MORE

# Multi-node Parallelism in Julia on an HPC (XSEDE Comet)

#### February 23 2016 in HPC, Julia, Programming | Tags: comet, HPC, julia, multi-node, XSEDE | Author: Christopher Rackauckas

Today I am going to show you how to parallelize your Julia code over some standard HPC interfaces. First I will go through the steps of parallelizing a simple code, and then running it with single-node parallelism and multi-node parallelism. The compute resources I will be using are the XSEDE (SDSC) Comet computer (using Slurm) and UC Irvine's HPC (using SGE) to show how to run the code in two of the main job schedulers. You can follow along with the Comet portion by applying and getting a trial allocation. READ MORE
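As a taste of the single-node pattern, here is a minimal sketch using a Monte Carlo π estimate as a stand-in problem (the post's actual example differs, and the 4-worker count is an assumption; on a cluster, a package like ClusterManagers.jl would replace the local `addprocs` call):

```julia
using Distributed

addprocs(4)  # spawn 4 local workers; a cluster manager would request them from the scheduler

# Each worker estimates pi from n random points in the unit square
@everywhere montepi(n) = 4 * count(_ -> rand()^2 + rand()^2 < 1, 1:n) / n

# Farm one estimate out to each worker and average the results
estimates = pmap(_ -> montepi(10^6), 1:nworkers())
println(sum(estimates) / nworkers())
```

The same script runs unchanged on one node or many; only the way the workers are launched differs between the schedulers.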

# Blog Upgraded: A Good Experience With Bluehost

#### February 23 2016 in Uncategorized | Tags: | Author: Christopher Rackauckas

My blog is back up, and now it's on a more powerful server to deal with the increase in traffic I have been getting. I really want to give a shout-out to Bluehost's customer support: I went onto their live chat and talked with someone who really knew the different options. I am going to explain what I learned from Devin and what choice I finally made.

## A Rundown of Bluehost's Hosting Options

Bluehost's hosting options can be summarized as follows:

- Dedicated hosting. If you need this you know what it is.
- VPS. A Virtual Private Server. If you get this kind of server, the server management is up to you. This is the second most powerful option (only Dedicated is above it), but I did not want to take on the overhead of managing the resource, especially since it would be more than ... READ MORE

# Comparison of US HPC Resources

#### February 20 2016 in HPC | Tags: HPC, XSEDE | Author: Christopher Rackauckas

It can sometimes be quite daunting to get the information you need. When looking for the right HPC to run code on, there are a lot of computers in the US to choose from. I decided to start compiling a lot of the information into some tables in order to make it easier to understand the options. I am right now starting with a small subset which includes Blue Waters and some of XSEDE with rows for the parts that interest me, but if you would like for me to add to the list please let me know. READ MORE