# Finalizing Your Julia Package: Documentation, Testing, Coverage, and Publishing

#### May 16 2016 in Julia | Tags: AppVeyor, coverage, documentation, Documenter.jl, julia, testing, Travis CI | Author: Christopher Rackauckas

In this tutorial we will go through the steps to finalize a Julia package. At this point you have some functionality you wish to share with the world... what do you do? You want documentation, code testing on every commit (across all the major operating systems), a nice badge showing how much of the code is tested, and registration in METADATA so that people can install your package just by typing Pkg.add("Pkgname"). How do you do all of this?

Note: At any time, feel free to check out my package repository DifferentialEquations.jl, which should be a working example.

## Generate the Package and Get it on Github

First you will want to generate your package and get it into a GitHub repository. Make sure you have a GitHub account, and then set up the environment variables in the git shell:
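The setup looks something like the following. Note that the name, email, and username here are placeholders, not values from the post; substitute your own GitHub details:

```shell
# Identify yourself to git (placeholder values -- use your own).
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Set your GitHub username so package tooling can find your account.
git config --global github.user "yourusername"
```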

# Optimal Number of Workers for Parallel Julia

#### April 16 2016 in HPC, Julia, Programming, Stochastics | Tags: BLAS, hyperthreading, julia, parallel computing, workers | Author: Christopher Rackauckas

How many workers do you choose when running a parallel job in Julia? The answer is easy, right? The number of physical cores. We always default to that number. For my Core i7 4770K, that means 4, not 8, since 8 would count the hyperthreads. On my FX8350 there are 8 cores but only 4 floating-point units (FPUs) which do the math, so for mathematical projects I should use 4, right? I want to demonstrate that it's not that simple.

## Where the Intuition Comes From

Most of the time when doing scientific computing you are doing parallel programming without even knowing it. This is because many vectorized operations are "implicitly parallel", meaning that they are multi-threaded behind the scenes to make everything faster. The same is true in other languages like Python, MATLAB, and R. Fire up MATLAB ... READ MORE
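This implicit parallelism can be inspected and controlled directly. A minimal sketch in modern Julia (the 2016-era API differed slightly):

```julia
using LinearAlgebra

# BLAS operations like matrix multiply are multi-threaded behind the scenes.
# Capping the thread count makes the implicit parallelism explicit.
BLAS.set_num_threads(2)
nthreads = BLAS.get_num_threads()

A = rand(200, 200)
B = rand(200, 200)
C = A * B   # this gemm call runs on the BLAS threads configured above
```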

# Benchmarks of Multidimensional Stack Implementations in Julia

#### March 20 2016 in Julia, Programming | Tags: benchmark, data structures, julia, stack | Author: Christopher Rackauckas

DataStructures.jl claims it's fast. How does it fare? I wrote some quick code to check it out. What I wanted to find out is which approach does best for implementing a stack where each element is three integers. I tried filling a pre-allocated array, pushing into three separate vectors, and different stack implementations from the DataStructures.jl package.

```julia
function baseline()
  stack = Array{Int64,2}(1000000,3)
  for i=1:1000000,j=1:3
    stack[i,j]=i
  end
end

function baseline2()
  stack = Array{Int64,2}(1000000,3)
  for j=1:3,i=1:1000000
    stack[i,j]=i
  end
end
```

... READ MORE
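The "three separate vectors" variant mentioned above can be sketched like this (my own sketch in modern Julia syntax, not code from the post):

```julia
# Push each of the three integers of an element into its own vector.
# sizehint! pre-reserves capacity so push! does not repeatedly reallocate.
function three_vectors(n)
    a = Int[]; b = Int[]; c = Int[]
    sizehint!(a, n); sizehint!(b, n); sizehint!(c, n)
    for i in 1:n
        push!(a, i); push!(b, i); push!(c, i)
    end
    return a, b, c
end

a, b, c = three_vectors(1000)
```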

# Interfacing with a Xeon Phi via Julia

#### March 4 2016 in C, HPC, Julia, Programming, Stochastics, Xeon Phi | Tags: C, julia, MIC, OpenMP, parallel, Xeon Phi | Author: Christopher Rackauckas

(Disclaimer: This is not a full-Julia solution for using the Phi, and instead is a tutorial on how to link OpenMP/C code for the Xeon Phi to Julia. There may be a future update where some of these functions are specified in Julia, and Intel's CompilerTools.jl looks like a viable solution, but for now it's not possible.)

Intel's Xeon Phi has a lot of appeal. It's an instant cluster in your computer, right? It turns out it's not quite that easy. For one, the installation process itself is quite tricky, and the device has stringent requirements for motherboard choices. Also, maxing out at over a teraflop is good, but not quite as high as NVIDIA's GPU acceleration cards.

However, there are a few big reasons why I think our interest in the Xeon Phi should be renewed. For one, Intel ... READ MORE

# Multiple-GPU Parallelism on the HPC with Julia

#### February 28 2016 in CUDA, HPC, Julia, Programming | Tags: CUDA, gpu, HPC, julia | Author: Christopher Rackauckas

This is the exciting Part 3 of using Julia on an HPC. First I got you started with using Julia on multiple nodes. Second, I showed you how to get the code running on the GPU. That gets you pretty far. However, if you got a trial allocation on Comet and started running jobs, you may have noticed when looking at the architecture that you're not getting to use the full GPU. In the job script I showed you, I asked for 2 GPUs. Why? Well, that's because the flagship NVIDIA GPU, the Tesla K80, is actually a dual GPU and you have to control the two parts separately. You may have been following along on your own computer and have been wondering how you use the multiple GPUs in your setup as well. This tutorial will ... READ MORE

# Tutorial for Julia on the HPC with GPUs

#### February 23 2016 in CUDA, HPC, Julia, Programming | Tags: CUDA, gpu, HPC, julia | Author: Christopher Rackauckas

This is a continuation of my previous post on using Julia on the XSEDE Comet HPC. Check that out first for an explanation of the problem. In that problem, we wished to solve for the area of a region where a polynomial was less than 1, which was calculated by code like: READ MORE

# Multi-node Parallelism in Julia on an HPC (XSEDE Comet)

#### February 23 2016 in HPC, Julia, Programming | Tags: comet, HPC, julia, multi-node, XSEDE | Author: Christopher Rackauckas

Today I am going to show you how to parallelize your Julia code over some standard HPC interfaces. First I will go through the steps of parallelizing a simple code, and then running it with single-node parallelism and multi-node parallelism. The compute resources I will be using are the XSEDE (SDSC) Comet computer (using Slurm) and UC Irvine's HPC (using SGE) to show how to run the code in two of the main job schedulers. You can follow along with the Comet portion by applying and getting a trial allocation. READ MORE
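The single-node part of this pattern can be sketched as follows (my own sketch in modern Julia, where the distributed primitives live in the Distributed standard library; in 2016 they were in Base):

```julia
using Distributed

addprocs(2)               # single-node parallelism: spawn 2 local workers

@everywhere f(x) = x^2    # define the work function on every worker

results = pmap(f, 1:10)   # distribute the map over the worker pool
```

On a cluster, the same code scales to multiple nodes by launching the workers through the scheduler (e.g. Slurm or SGE) instead of locally.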

# Using Julia's C Interface to Utilize C Libraries

#### February 4 2016 in C, Julia, Programming | Tags: | Author: Christopher Rackauckas

I recently ran into the problem that Julia's GSL.jl (GNU Scientific Library) wrapper is too new, i.e. some portions don't work as of early 2016. However, to solve my problem I needed access to adaptive Monte Carlo integration methods. This meant it was time to go in depth into Julia's C interface. I will go step by step through how I created and compiled the C code, and called it from Julia. READ MORE
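As a minimal illustration of the mechanism (not the adaptive Monte Carlo code from the post), `ccall` can invoke a C library function directly, here a libc routine that is already loaded in the Julia process:

```julia
# ccall(symbol, return_type, (argument_types...,), arguments...)
# strlen is a libc function, so no extra shared library needs to be loaded.
len = ccall(:strlen, Csize_t, (Cstring,), "hello")
```

The full post covers the harder part: compiling your own C code into a shared library and pointing `ccall` at it.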

# Julia iFEM3: Solving the Poisson Equation via MATLAB Interfacing

#### January 24 2016 in FEM, Julia, MATLAB, Programming | Tags: FEM, julia, Poisson Equation | Author: Christopher Rackauckas

This is the third part in the series for building a finite element method solver in Julia. Last time we used our mesh generation tools to assemble the stiffness matrix. The details for what will be outlined here can be found in this document. I do not want to dwell too much on the actual code details since they are quite nicely spelled out there, so instead I will focus on the porting of the code. The full code is at the bottom for reference.

## The Ups, Downs, and Remedies to Math in Julia

At this point I have been coding in Julia for over a week and have been loving it. I come into each new function knowing that if I just change the array dereferencing from () to [] and wrap vec() calls around vectors being used as ... READ MORE

# Julia iFEM 2: Optimizing Stiffness Matrix Assembly

#### January 23 2016 in FEM, Julia, MATLAB, Programming | Tags: julia, optimization, sparse, vectorization | Author: Christopher Rackauckas

This is the second post looking at building a finite element method solver in Julia. The first post was about mesh generation and language bindings. In this post we are going to focus on performance. We start with the command from the previous post:

```julia
node,elem = squaremesh([0 1 0 1],.01)
```

which generates an array elem where each row holds the reference indices to the 3 points which form a triangle (element). The actual locations of these points are in the array node, and so node(i,:) gives the (x,y)-coordinates of the i-th point. What the call is saying is that these are generated for the unit square with mesh-size .01, meaning ... READ MORE