## MAYBE EXPLAIN

* FIXME What is [`Base.@async`](https://docs.julialang.org/en/stable/stdlib/parallel/Base.@async)?

* FIXME Show use of `@spawn` to run a task (a sketch follows the block below).

```julia
# Launch the server in a background task, then stop it by throwing an
# InterruptException into that task.
t = @async run(server, 3000)

ex = InterruptException()
Base.throwto(t, ex)
close(http.sock) # ideally HttpServer would catch the exception and clean up
```
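
As a stopgap for the `@spawn` FIXME above, here is a minimal sketch assuming Julia's `Distributed` standard library (on Julia 0.6 these names live in `Base`); the summed computation is just a placeholder:

```julia
using Distributed
addprocs(2)                      # start two worker processes

# @spawn runs the expression on an available worker and returns a Future.
f = @spawn sum(rand(1_000_000))

# fetch blocks the current task until the remote result arrives.
println(fetch(f))
```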


## JuliaRun

* FIXME Explain JuliaRun.


---------------

## Memory Sharing FIXME

| WARNING |
|---------|
| **Julia plans to rework the parallel interface, so this chapter will be completely reworked then.** |
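
Until that rework lands, here is a minimal sketch of one memory-sharing mechanism, a `SharedArray` (assuming Julia 1.0 naming, where `@parallel` became `@distributed`):

```julia
using Distributed, SharedArrays
addprocs(2)

# A SharedArray is backed by shared memory visible to all local workers.
a = SharedArray{Float64}(10)

# Each worker writes into its own part of the shared array.
@sync @distributed for i in 1:10
    a[i] = i^2
end

println(a)
```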


---------------

## Custom Multitasking Schedulers FIXME

FIXME Julia has an [example scheduler](https://docs.julialang.org/en/stable/manual/parallel-computing).  I hope I have convinced the Julia folks to allow sending signals back to the scheduler (i.e., "stop scheduling more processes; we already have the answer").  See [https://github.com/JuliaLang/julia/issues/26659].

* Figure out how to wait for the next available core, decide what we want to do, dispatch to it, and go back to sleep until another core becomes available (or a timeout fires).  A sketch follows below.
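
A rough sketch of that idea, using a `Channel` of idle worker ids as the "next available core" queue; the job function `x -> x^2` is just a placeholder:

```julia
using Distributed
addprocs(4)

# Idle workers park their ids here; take! sleeps until one is free.
idle = Channel{Int}(nworkers())
foreach(w -> put!(idle, w), workers())

results = Channel{Int}(32)
njobs = 10

for job in 1:njobs
    w = take!(idle)             # sleep until a core becomes available
    @async begin
        put!(results, remotecall_fetch(x -> x^2, w, job))
        put!(idle, w)           # hand the core back to the dispatcher
    end
end

for _ in 1:njobs
    println(take!(results))
end
```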


### Locks for Synchronization

* https://docs.julialang.org/en/stable/stdlib/parallel/#Synchronization-Primitives-1

* We want to wait dynamically---i.e., examine results before queuing much more work.

* `@sync` waits for all enclosed parallel operations to complete.  `@async println` makes it possible to force atomic writes!  `@sync @parallel` runs parallel loops (but watch out for shared write memory).

* `@sync` waits on `@async`, `@spawn`, `@spawnat`, and `@parallel`.

* Check out `@fetch` (with `RemoteChannel`); the networking layer does some asynchronous listen/accept/etc.

* A `RemoteChannel` can be used for synchronizing many processes (see the sketch below).
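
A small sketch combining `@sync`, `@async`, and a `RemoteChannel`; the doubling function is a placeholder:

```julia
using Distributed
addprocs(2)

# A RemoteChannel that any process can put! results into.
results = RemoteChannel(() -> Channel{Int}(8))

# @sync blocks until every enclosed @async task has finished.
@sync for i in 1:4
    @async put!(results, remotecall_fetch(x -> 2x, workers()[mod1(i, nworkers())], i))
end

# All four results are now queued.
for _ in 1:4
    println(take!(results))
end
```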


---------------

## SIMD

Julia tries to vectorize and use CPU SIMD instructions whenever it can recognize an opportunity.  This is not always successful.  For one thing, SIMD instructions work on fixed-width batches (e.g., four `Float64` values per 256-bit AVX register on recent Intel CPUs).  A non-standard package, [SIMD.jl](https://github.com/eschnett/SIMD.jl), may help.  It defines `Vec` (not `Vector`).  Here is the package example:

QUESTION FIXME Get a SIMD example to work.

```julia
using SIMD

function vadd!(xs::Vector{T}, ys::Vector{T}, ::Type{Vec{N,T}}) where {N,T}
    @assert length(ys) == length(xs)
    @assert length(xs) % N == 0
    @inbounds for i in 1:N:length(xs)
        xv = vload(Vec{N,T}, xs, i)   # load N elements into a SIMD register
        yv = vload(Vec{N,T}, ys, i)
        xv += yv                      # one SIMD add covers N elements
        vstore(xv, xs, i)             # store the result back into xs
    end
end
```

`@inbounds` turns off bounds (exception) checking on array indexing, which is necessary for the loop to vectorize well.  Starting Julia with `julia -O3` enables more aggressive optimization.
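
A minimal call of the function above; note that the array length must be a multiple of the vector width `N`:

```julia
xs = ones(Float64, 16)
ys = collect(Float64, 1:16)

vadd!(xs, ys, Vec{4,Float64})   # xs now holds 1 .+ (1:16)
println(xs)
```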

```julia
# Create a vector where all elements are Float64(1):
xs = Vec{4,Float64}(1)

# Create a vector from a tuple, and convert it back to a tuple:
ys = Vec{4,Float32}((1,2,3,4))
ys1 = NTuple{4,Float32}(ys)
y2 = ys[2]   # getindex

# Update one element of a vector:
ys = setindex(ys, 5, 3)   # cannot use ys[3] = 5
```

FIXME SIMD needs to become a real example with timing (a rough sketch follows).
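
As a stopgap, a rough timing harness assuming the `vadd!` definition above; run it twice so the first `@time` is not dominated by compilation, and expect numbers to vary by machine:

```julia
xs = rand(Float64, 4_000_000)
ys = rand(Float64, 4_000_000)

@time vadd!(xs, ys, Vec{4,Float64})   # explicit SIMD
@time xs .+= ys                       # broadcast baseline
```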


---------------

## GPU Operations

Julia provides different interfaces to GPU programming: [OpenCL.jl](https://github.com/JuliaGPU/OpenCL.jl) and [CUDArt.jl](https://github.com/JuliaGPU/CUDArt.jl) provide access to the OpenCL and CUDA environments.  The user can write kernels in CUDA/OpenCL C and execute them on the GPU, but do the management in Julia.  Julia can also compile kernels for the GPU itself with [CUDAnative.jl](https://github.com/JuliaGPU/CUDAnative.jl), as in the sketch below.
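
A minimal sketch assuming the 2018-era CUDAnative.jl plus CuArrays.jl combination (both later merged into CUDA.jl); `saxpy!` is a made-up kernel name:

```julia
using CUDAnative, CuArrays

# A kernel compiled by Julia itself and run on the GPU.
function saxpy!(y, a, x)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(y)
        @inbounds y[i] = a * x[i] + y[i]
    end
    return nothing
end

x = CuArray(rand(Float32, 1024))
y = CuArray(rand(Float32, 1024))

@cuda threads=256 blocks=4 saxpy!(y, 2f0, x)
println(Array(y)[1:4])   # copy back to the host and inspect
```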

[ArrayFire.jl](https://github.com/JuliaComputing/ArrayFire.jl) provides a high-level interface with little effort.  Be aware that it is only worth using when you have a large number of very similar, standard calculations that can be batched onto the GPU.
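
A tiny sketch, assuming the `AFArray` type from the package README:

```julia
using ArrayFire

# Move data to the device; subsequent operations run through ArrayFire.
a = AFArray(rand(Float32, 1000, 1000))
b = a * a            # matrix multiply on the GPU
c = Array(b)         # copy the result back to the host
```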

[Flux.jl](https://github.com/FluxML/Flux.jl) is in active development, with only lightweight abstractions.

FIXME GPU operations example to be written.
  