JuliaDiffEq: Homepage for the JuliaDiffEq organization
http://juliadiffeq.org

DifferentialEquations.jl v6.8.0: Advanced Stiff Differential Equation Solving

<p>This release covers the completion of another successful summer. We have now
completed a new round of tooling for solving large stiff and sparse differential
equations. Most of this is covered in the exciting….</p>
<h2 id="new-tutorial-solving-stiff-equations-for-advanced-users">New Tutorial: Solving Stiff Equations for Advanced Users!</h2>
<p>That is right, we now have a new tutorial added to the documentation on
<a href="http://docs.juliadiffeq.org/latest/tutorials/advanced_ode_example.html">solving stiff differential equations</a>.
This tutorial goes into depth, showing how to use our recent developments to
do things like automatically detect and optimize a solver with respect to
sparsity pattern, or automatically symbolically calculate a Jacobian from a
numerical code. This should serve as a great resource for the advanced users
who want to know how to get started with those finer details like sparsity
patterns and mass matrices.</p>
<h2 id="automatic-colorization-and-optimization-for-structured-matrices">Automatic Colorization and Optimization for Structured Matrices</h2>
<p>As showcased in the tutorial, if your <code class="language-plaintext highlighter-rouge">jac_prototype</code> is a structured matrix,
then the <code class="language-plaintext highlighter-rouge">colorvec</code> is automatically computed, meaning that types like
<code class="language-plaintext highlighter-rouge">BandedMatrix</code> are now automatically optimized. The default linear solvers make
use of their special methods, meaning that DiffEq has full support for these
structured matrix objects in an optimal manner.</p>
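As a hedged sketch of what this enables (DiffEq v6-era API; the tridiagonal system, size, and solver choice below are illustrative assumptions, not from the post):

```julia
# Declare a structured Jacobian prototype; the color vector and specialized
# banded linear solve are then set up automatically.
using OrdinaryDiffEq, BandedMatrices

n = 32
function f!(du, u, p, t)
    du[1] = -2u[1] + u[2]
    for i in 2:n-1
        du[i] = u[i-1] - 2u[i] + u[i+1]
    end
    du[n] = u[n-1] - 2u[n]
    nothing
end

jp = BandedMatrix{Float64}(undef, (n, n), (1, 1))  # bandwidths (l, u) = (1, 1)
ff = ODEFunction(f!, jac_prototype = jp)           # colorvec computed automatically
prob = ODEProblem(ff, rand(n), (0.0, 1.0))
sol = solve(prob, Rodas5())
```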
<h2 id="implicit-extrapolation-and-parallel-dirk-for-stiff-odes">Implicit Extrapolation and Parallel DIRK for Stiff ODEs</h2>
<p>At the tail end of the summer, a set of implicit extrapolation methods were
completed. We plan to parallelize these over the next year, seeing what can
happen on small stiff ODEs if parallel W-factorizations are allowed.</p>
<h2 id="automatic-conversion-of-numerical-to-symbolic-code-with-modelingtoolkitize">Automatic Conversion of Numerical to Symbolic Code with Modelingtoolkitize</h2>
<p>This is just really cool and showcased in the new tutorial. If you give us a
function for numerically computing the ODE, we can now automatically convert
said function into a symbolic form in order to compute quantities like the
Jacobian and then build Julia code for the generated Jacobian. Check out the
new tutorial if you’re curious, because although it sounds crazy… this is
now a standard feature!</p>
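A minimal sketch of what this looks like, following the v6-era tutorial (the Lorenz system is illustrative, and the exact Jacobian-generation incantation varies by ModelingToolkit version):

```julia
# Trace a numerically-defined ODEProblem into symbolic form.
using OrdinaryDiffEq, ModelingToolkit

function lorenz!(du, u, p, t)
    du[1] = p[1] * (u[2] - u[1])
    du[2] = u[1] * (p[2] - u[3]) - u[2]
    du[3] = u[1] * u[2] - p[3] * u[3]
end

prob = ODEProblem(lorenz!, [1.0, 0.0, 0.0], (0.0, 100.0), (10.0, 28.0, 8 / 3))
sys = modelingtoolkitize(prob)   # symbolic form recovered from the Julia code
# From sys, ModelingToolkit.generate_jacobian can symbolically compute the
# Jacobian and build Julia code for it (see the advanced tutorial for details).
```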
<h2 id="gpu-optimized-sparse-colored-automatic-and-finite-differentiation">GPU-Optimized Sparse (Colored) Automatic and Finite Differentiation</h2>
<p>SparseDiffTools.jl and DiffEqDiffTools.jl were made GPU-optimized, meaning that
Jacobian construction is no longer a rate-limiting step for the stiff ODE
solvers.</p>
<h2 id="diffeqbiologicaljl-homotopy-continuation">DiffEqBiological.jl: Homotopy Continuation</h2>
<p>DiffEqBiological got support for automatic bifurcation plot generation by
connecting with HomotopyContinuation.jl. See <a href="https://github.com/JuliaDiffEq/DiffEqBiological.jl#making-bifurcation-diagram">the new tutorial</a>.</p>
<h2 id="greatly-improved-delay-differential-equation-solving">Greatly improved delay differential equation solving</h2>
<p>David Widmann (@devmotion) greatly improved the delay differential equation
solver’s implicit step handling, along with adding a bunch of tests to show
that it passes the special RADAR5 test suite!</p>
<h2 id="color-differentiation-integration-with-native-julia-de-solvers">Color Differentiation Integration with Native Julia DE Solvers</h2>
<p>The <code class="language-plaintext highlighter-rouge">ODEFunction</code>, <code class="language-plaintext highlighter-rouge">DDEFunction</code>, <code class="language-plaintext highlighter-rouge">SDEFunction</code>, <code class="language-plaintext highlighter-rouge">DAEFunction</code>, etc. constructors
now allow you to specify a color vector. This will reduce the number of <code class="language-plaintext highlighter-rouge">f</code>
calls required to compute a sparse Jacobian, giving a massive speedup to the
computation of a Jacobian and thus of an implicit differential equation solve.
The color vectors can be computed automatically using the SparseDiffTools.jl
library’s <code class="language-plaintext highlighter-rouge">matrix_colors</code> function. Thanks to JSoC student Langwen Huang
(@huanglangwen) for this contribution.</p>
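A hedged sketch of supplying the color vector (DiffEq v6-era API; the tridiagonal system and solver choice are illustrative):

```julia
# Supply a sparsity pattern plus color vector: the sparse Jacobian is then
# built with ~3 f calls instead of length(u).
using OrdinaryDiffEq, SparseDiffTools, SparseArrays, LinearAlgebra

n = 100
function f!(du, u, p, t)
    du[1] = -2u[1] + u[2]
    for i in 2:n-1
        du[i] = u[i-1] - 2u[i] + u[i+1]
    end
    du[n] = u[n-1] - 2u[n]
    nothing
end

jp = sparse(SymTridiagonal(fill(-2.0, n), ones(n - 1)))  # sparsity pattern
colors = matrix_colors(jp)                               # from SparseDiffTools.jl
ff = ODEFunction(f!, jac_prototype = jp, colorvec = colors)
prob = ODEProblem(ff, rand(n), (0.0, 1.0))
sol = solve(prob, TRBDF2())
```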
<h2 id="improved-compile-times">Improved compile times</h2>
<p>Compile times should now be greatly improved thanks to work from David
Widmann (@devmotion) and others.</p>
<h1 id="next-directions">Next Directions</h1>
<p>Our current development is very much driven by the ongoing GSoC/JSoC projects,
which is a good thing because they are outputting some really amazing results!</p>
<p>Here are some things to look forward to:</p>
<ul>
<li>Automated matrix-free finite difference PDE operators</li>
<li>Jacobian reuse efficiency in Rosenbrock-W methods</li>
<li>Native Julia fully implicit ODE (DAE) solving in OrdinaryDiffEq.jl</li>
<li>High Strong Order Methods for Non-Commutative Noise SDEs</li>
<li>Stochastic delay differential equations</li>
</ul>
Thu, 07 Nov 2019 12:00:00 +0000
http://juliadiffeq.org/2019/11/07/ParallelStiff.html

DifferentialEquations.jl v6.7.0: GPU-based Ensembles and Automatic Sparsity

<p>Let’s just jump right in! This time we have a bunch of new GPU tools and
sparsity handling.</p>
<h2 id="breaking-with-deprecations-diffeqgpu-gpu-based-ensemble-simulations">(Breaking with Deprecations) DiffEqGPU: GPU-based Ensemble Simulations</h2>
<p>The <code class="language-plaintext highlighter-rouge">MonteCarloProblem</code> interface received an overhaul. First of all, the
interface has been renamed to <code class="language-plaintext highlighter-rouge">Ensemble</code>. The changes are:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">MonteCarloProblem</code> -> <code class="language-plaintext highlighter-rouge">EnsembleProblem</code></li>
<li><code class="language-plaintext highlighter-rouge">MonteCarloSolution</code> -> <code class="language-plaintext highlighter-rouge">EnsembleSolution</code></li>
<li><code class="language-plaintext highlighter-rouge">MonteCarloSummary</code> -> <code class="language-plaintext highlighter-rouge">EnsembleSummary</code></li>
<li><code class="language-plaintext highlighter-rouge">num_monte</code> -> <code class="language-plaintext highlighter-rouge">trajectories</code></li>
</ul>
<p><strong>Specifying <code class="language-plaintext highlighter-rouge">parallel_type</code> has been deprecated</strong> and a deprecation warning is
thrown mentioning this. So don’t worry: your code will work but will give
warnings as to what to change. Additionally, <strong>the DiffEqMonteCarlo.jl package
is no longer necessary for any of this functionality</strong>.</p>
<p>Now, <code class="language-plaintext highlighter-rouge">solve</code> of an <code class="language-plaintext highlighter-rouge">EnsembleProblem</code> works on the same dispatch mechanism as the
rest of DiffEq, which looks like <code class="language-plaintext highlighter-rouge">solve(ensembleprob,Tsit5(),EnsembleThreads(),trajectories=n)</code>
where the third argument is an ensembling algorithm to specify the
threading-based form. Code with the deprecation warning will work until the
release of DiffEq 7.0, at which time the alternative path will be removed.</p>
<p>See the <a href="http://docs.juliadiffeq.org/latest/features/ensemble.html">updated ensembles page for more details</a>.</p>
<p>The change to dispatch was done for a reason: it allows us to build new libraries
specifically for sophisticated handling of many trajectory ODE solves without
introducing massive new dependencies to the standard DifferentialEquations.jl
user. However, many people might be interested in the first project to make
use of this: <a href="https://github.com/JuliaDiffEq/DiffEqGPU.jl">DiffEqGPU.jl</a>.
DiffEqGPU.jl lets you define a problem, like an <code class="language-plaintext highlighter-rouge">ODEProblem</code>, and then solve
thousands of trajectories in parallel using your GPU. The syntax looks like:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">monteprob</span> <span class="o">=</span> <span class="n">EnsembleProblem</span><span class="x">(</span><span class="n">my_ode_prob</span><span class="x">)</span>
<span class="n">solve</span><span class="x">(</span><span class="n">monteprob</span><span class="x">,</span><span class="n">Tsit5</span><span class="x">(),</span><span class="n">EnsembleGPUArray</span><span class="x">(),</span><span class="n">trajectories</span><span class="o">=</span><span class="mi">100_000</span><span class="x">)</span>
</code></pre></div></div>
<p>and it will return 100,000 ODE solves. <strong>We have seen between a 12x and 90x speedup
depending on the GPU of the test systems</strong>, meaning that this can be a massive
improvement for parameter space exploration on smaller systems of ODEs.
Currently there are a few limitations of this method, including that events
cannot be used, but those will be addressed shortly. Additional methods for
GPU-based parameter parallelism are coming soon to the same interface. Also
planned are GPU-accelerated multi-level Monte Carlo methods for faster weak
convergence of SDEs.</p>
<p>Again, this is utilizing compilation tricks to take the user-defined <code class="language-plaintext highlighter-rouge">f</code>
and recompile it on the fly to a <code class="language-plaintext highlighter-rouge">.ptx</code> kernel, and generating kernel-optimized
array-based formulations of the existing ODE solvers.</p>
<h2 id="automated-sparsity-detection">Automated Sparsity Detection</h2>
<p>Shashi Gowda (@shashigowda) implemented a sparsity detection algorithm which
digs through user-defined Julia functions with Cassette.jl to find out what
inputs influence the output. The basic version checks along a given trace, but
a more sophisticated version, which we are calling Concolic Combinatoric Analysis,
looks at all possible branch choices and utilizes this to conclusively build a
Jacobian whose sparsity pattern captures the possible variable interactions.</p>
<p>The nice part is that this functionality is very straightforward to use.
For example, let’s say we had the following function:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">function</span><span class="nf"> f</span><span class="x">(</span><span class="n">dx</span><span class="x">,</span><span class="n">x</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
<span class="k">for</span> <span class="n">i</span> <span class="k">in</span> <span class="mi">2</span><span class="o">:</span><span class="n">length</span><span class="x">(</span><span class="n">x</span><span class="x">)</span><span class="o">-</span><span class="mi">1</span>
<span class="n">dx</span><span class="x">[</span><span class="n">i</span><span class="x">]</span> <span class="o">=</span> <span class="n">x</span><span class="x">[</span><span class="n">i</span><span class="o">-</span><span class="mi">1</span><span class="x">]</span> <span class="o">-</span> <span class="mi">2</span><span class="n">x</span><span class="x">[</span><span class="n">i</span><span class="x">]</span> <span class="o">+</span> <span class="n">x</span><span class="x">[</span><span class="n">i</span><span class="o">+</span><span class="mi">1</span><span class="x">]</span>
<span class="k">end</span>
<span class="n">dx</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">=</span> <span class="o">-</span><span class="mi">2</span><span class="n">x</span><span class="x">[</span><span class="mi">1</span><span class="x">]</span> <span class="o">+</span> <span class="n">x</span><span class="x">[</span><span class="mi">2</span><span class="x">]</span>
<span class="n">dx</span><span class="x">[</span><span class="k">end</span><span class="x">]</span> <span class="o">=</span> <span class="n">x</span><span class="x">[</span><span class="k">end</span><span class="o">-</span><span class="mi">1</span><span class="x">]</span> <span class="o">-</span> <span class="mi">2</span><span class="n">x</span><span class="x">[</span><span class="k">end</span><span class="x">]</span>
<span class="nb">nothing</span>
<span class="k">end</span>
</code></pre></div></div>
<p>If we want to find out the sparsity pattern of <code class="language-plaintext highlighter-rouge">f</code>, we would simply call:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">sparsity_pattern</span> <span class="o">=</span> <span class="n">sparsity!</span><span class="x">(</span><span class="n">f</span><span class="x">,</span><span class="n">output</span><span class="x">,</span><span class="n">input</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span>
</code></pre></div></div>
<p>where <code class="language-plaintext highlighter-rouge">output</code> is an array like <code class="language-plaintext highlighter-rouge">dx</code>, <code class="language-plaintext highlighter-rouge">input</code> is an array like <code class="language-plaintext highlighter-rouge">x</code>, <code class="language-plaintext highlighter-rouge">p</code>
holds possible parameters, and <code class="language-plaintext highlighter-rouge">t</code> is a possible time value. The function will then
be analyzed and <code class="language-plaintext highlighter-rouge">sparsity_pattern</code> will be a <code class="language-plaintext highlighter-rouge">Sparsity</code> object holding the <code class="language-plaintext highlighter-rouge">I</code> and <code class="language-plaintext highlighter-rouge">J</code>
indices of the nonzero elements of the Jacobian. By doing
<code class="language-plaintext highlighter-rouge">sparse(sparsity_pattern)</code> we can turn this into a <code class="language-plaintext highlighter-rouge">SparseMatrixCSC</code> with the
correct sparsity pattern.</p>
<p>This functionality highlights the power of Julia since there is no way to
conclusively determine the Jacobian of an arbitrary program <code class="language-plaintext highlighter-rouge">f</code> using numerical
techniques, since all sorts of scenarios lead to “fake zeros” (cancellation,
not checking a place in parameter space where a branch is false, etc.). However,
by directly utilizing Julia’s compiler and the SSA provided by a Julia function
definition we can perform a non-standard interpretation that tells all of the
possible numerical ways the program can act, thus conclusively determining
all of the possible variable interactions.</p>
<p>Of course, you can still specify analytical Jacobians and sparsity patterns
if you want, but if you’re lazy… :)</p>
<p>See <a href="https://github.com/JuliaDiffEq/SparsityDetection.jl">SparsityDetection.jl’s README for more details</a>.</p>
<h2 id="gpu-offloading-in-implicit-de-solving">GPU Offloading in Implicit DE Solving</h2>
<p>We are pleased to announce the <code class="language-plaintext highlighter-rouge">LinSolveGPUFactorize</code> option which allows for
automatic offloading of linear solves to the GPU. For a problem with a large
enough dense Jacobian, using <code class="language-plaintext highlighter-rouge">linsolve=LinSolveGPUFactorize()</code> will now
automatically perform the factorization and back-substitution on the GPU,
allowing for better scaling. For example:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">using</span> <span class="n">CuArrays</span>
<span class="n">Rodas5</span><span class="x">(</span><span class="n">linsolve</span> <span class="o">=</span> <span class="n">LinSolveGPUFactorize</span><span class="x">())</span>
</code></pre></div></div>
<p>This simply requires a working installation of CuArrays.jl. See
<a href="http://docs.juliadiffeq.org/latest/features/linear_nonlinear.html">the linear solver documentation for more details</a>.</p>
<h2 id="experimental-automated-accelerator-gpu-offloading">Experimental: Automated Accelerator (GPU) Offloading</h2>
<p>We have been dabbling in automated accelerator (GPU, multithreading,
distributed, TPU, etc.) offloading when the right hardware is detected and the
problem size is large enough to suggest a possible speedup.
<a href="https://github.com/JuliaDiffEq/DiffEqBase.jl/pull/273">A working implementation exists as a PR for DiffEqBase</a>
which would allow automated acceleration of linear solves in implicit DE solving.
However, this is somewhat invasive as a default, and very architecture-dependent,
so it is unlikely we will release it soon. Instead, we are investigating
this concept in more detail in <a href="https://github.com/JuliaDiffEq/AutoOffload.jl">AutoOffload.jl</a>. If you’re interested in Julia-wide automatic acceleration,
please take a look at the repo and help us get something going!</p>
<h2 id="a-complete-set-of-iterative-solver-routines-for-implicit-des">A Complete Set of Iterative Solver Routines for Implicit DEs</h2>
<p>Previous releases had only a pre-built GMRES implementation. However, as
detailed on the <a href="http://docs.juliadiffeq.org/latest/features/linear_nonlinear.html#IterativeSolvers.jl-Based-Methods-1">linear solver page</a>,
we now have an array of iterative solvers readily available, including:</p>
<ul>
<li>LinSolveGMRES – GMRES</li>
<li>LinSolveCG – CG (Conjugate Gradient)</li>
<li>LinSolveBiCGStabl – BiCGStab(l) (stabilized bi-conjugate gradient)</li>
<li>LinSolveChebyshev – Chebyshev</li>
<li>LinSolveMINRES – MINRES</li>
</ul>
<p>These are all compatible with matrix-free implementations of an
<code class="language-plaintext highlighter-rouge">AbstractDiffEqOperator</code>.</p>
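A hedged sketch of how these are selected (v6-era names from the list above; the solver pairing and the commented-out problem setup are illustrative assumptions):

```julia
# Pair a stiff ODE solver with an iterative, matrix-free linear solver.
using OrdinaryDiffEq

alg_gmres = TRBDF2(linsolve = LinSolveGMRES())  # Newton-Krylov via GMRES
alg_cg    = TRBDF2(linsolve = LinSolveCG())     # conjugate gradient

# With a matrix-free operator as the Jacobian prototype, the Krylov method
# never needs an assembled Jacobian, e.g. (using DiffEqOperators.jl):
# ff = ODEFunction(f!, jac_prototype = JacVecOperator(f!, u0))
# solve(ODEProblem(ff, u0, tspan), alg_gmres)
```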
<h2 id="exponential-integrator-improvements">Exponential integrator improvements</h2>
<p>Thanks to Yingbo Ma (@YingboMa), the exprb methods have been greatly improved.</p>
<h1 id="next-directions">Next Directions</h1>
<p>Our current development is very much driven by the ongoing GSoC/JSoC projects,
which is a good thing because they are outputting some really amazing results!</p>
<p>Here are some things to look forward to:</p>
<ul>
<li>Automated matrix-free finite difference PDE operators</li>
<li>Surrogate optimization</li>
<li>Jacobian reuse efficiency in Rosenbrock-W methods</li>
<li>Native Julia fully implicit ODE (DAE) solving in OrdinaryDiffEq.jl</li>
<li>High Strong Order Methods for Non-Commutative Noise SDEs</li>
<li>GPU-Optimized Sparse (Colored) Automatic Differentiation</li>
<li>Parallelized Implicit Extrapolation of ODEs</li>
</ul>
Fri, 05 Jul 2019 12:00:00 +0000
http://juliadiffeq.org/2019/07/05/AutomaticSparsity.html

DifferentialEquations.jl v6.6.0: Sparse Jacobian Coloring, Quantum Computer ODE Solvers, and Stiff SDEs

<h2 id="sparsity-performance-jacobian-coloring-with-numerical-and-forward-differentiation">Sparsity Performance: Jacobian coloring with numerical and forward differentiation</h2>
<p>If you have a function <code class="language-plaintext highlighter-rouge">f!(du,u)</code> which has a Tridiagonal Jacobian, you could
calculate that Jacobian by mixing perturbations. For example, instead of doing
<code class="language-plaintext highlighter-rouge">u .+ [epsilon,0,0,0,0,0,0,0,...]</code>, you’d do <code class="language-plaintext highlighter-rouge">u .+ [epsilon,0,0,epsilon,0,0,...]</code>.
Because the <code class="language-plaintext highlighter-rouge">epsilons</code> will never overlap, you can then decode this “compressed”
Jacobian into the sparse form. Do that 3 times and boom, full Jacobian in
4 calls to <code class="language-plaintext highlighter-rouge">f!</code> no matter the size of <code class="language-plaintext highlighter-rouge">u</code>! Without a color vector, this matrix
would take <code class="language-plaintext highlighter-rouge">1+length(u)</code> <code class="language-plaintext highlighter-rouge">f!</code> calls, so I’d say that’s a pretty good speedup.</p>
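The bookkeeping above can be sketched in plain Julia (the tridiagonal test function, step size, and color assignment here are illustrative):

```julia
# Coloring by hand for a tridiagonal f!: three colors, so the full Jacobian
# costs one base call plus three perturbed calls, independent of length(u).
function f!(du, u)
    n = length(u)
    du[1] = -2u[1] + u[2]
    for i in 2:n-1
        du[i] = u[i-1] - 2u[i] + u[i+1]
    end
    du[n] = u[n-1] - 2u[n]
    nothing
end

n = 9
u = rand(n)
colors = [mod1(j, 3) for j in 1:n]   # [1,2,3,1,2,3,...]
h = 1e-6
du0 = zeros(n); f!(du0, u)           # base evaluation: call 1

J = zeros(n, n)
for c in 1:3                         # one perturbed call per color: calls 2-4
    up = copy(u)
    up[colors .== c] .+= h           # perturb every column of color c at once
    dup = zeros(n); f!(dup, up)
    # Decompress: column j only touches rows j-1:j+1, and same-colored
    # columns are three apart, so their row ranges never overlap.
    for j in findall(==(c), colors), i in max(1, j - 1):min(n, j + 1)
        J[i, j] = (dup[i] - du0[i]) / h
    end
end
```

Since the test function is linear, the recovered `J` matches the exact tridiagonal Jacobian up to finite-difference roundoff.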
<p>This is called Jacobian coloring. <code class="language-plaintext highlighter-rouge">[1,2,3,1,2,3,1,2,3,...]</code> are the colors in
this example, and places with the same color can be differentiated simultaneously.
Now, the DiffEqDiffTools.jl internals allow for passing a color vector into the
numerical differentiation libraries and automatically decompressing into a
sparse Jacobian. This means that DifferentialEquations.jl will soon be compatible
with this dramatic speedup technique. In addition, other libraries in Julia which
rely on our utility libraries, like Optim.jl, could soon make good use of this.</p>
<p>What if you don’t know a good color vector for your Jacobian? No sweat! The
soon-to-be-released SparseDiffTools.jl repository has methods for automatically
generating color vectors using heuristic graphical techniques.
DifferentialEquations.jl will soon make use of this automatically if you specify
a sparse matrix for your Jacobian!</p>
<p>Note that the SparseDiffTools.jl repository also includes functions for calculating
the sparse Jacobians using color vectors and forward-mode automatic differentiation
(using Dual numbers provided by ForwardDiff.jl). In this case, the number of Dual
partials is equal to the number of colors, which can be dramatically lower than
<code class="language-plaintext highlighter-rouge">length(u)</code> (the dense default!), thereby dramatically reducing compile
and run time.</p>
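To make the partial-count claim concrete, here is a hedged sketch (assuming SparseDiffTools.jl’s `matrix_colors` plus the stdlib SparseArrays and LinearAlgebra; the size is illustrative):

```julia
# The number of partials needed equals the number of colors, not length(u).
using SparseDiffTools, SparseArrays, LinearAlgebra

n = 1000
A = sparse(SymTridiagonal(fill(-2.0, n), ones(n - 1)))  # tridiagonal pattern
colors = matrix_colors(A)
maximum(colors)   # small and independent of n (3 for a tridiagonal pattern)
```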
<p>Stay tuned for the next releases which begin to auto-specialize everything
along the way based on sparsity structure. Thanks to JSoC student Pankaj (@pkj-m)
for this work.</p>
<h2 id="higher-weak-order-srock-methods-for-stiff-sdes">Higher weak order SROCK methods for stiff SDEs</h2>
<p>Deepesh Thakur (@deeepeshthakur) continues his roll with stiff stochastic
differential equation solvers, implementing not one but seven new high weak order
stiff SDE solvers: SROCK1 with generalized noise, SKSROCK, and a bunch of
variants of SROCK2. Benchmark updates will come soon, but I have a feeling
that these new methods may be by far the most stable methods in the library,
and the ones which achieve the lowest error in the mean solution most efficiently.</p>
<h2 id="diffeqbot">DiffEqBot</h2>
<p>GSoC student Kanav Gupta (@kanav99) implemented a bot for the JuliaDiffEq
team that allows us to run performance regression benchmarks on demand with
preset Gitlab runners. Right now this has a dedicated machine for CPU and
parallelism performance testing, and soon we’ll have a second machine
up and running for performance testing on GPUs. If you haven’t seen the Julialang
blog post on this topic, <a href="https://julialang.org/blog/2019/06/diffeqbot">please check it out</a>!</p>
<h2 id="quantum-ode-solver-qulde">Quantum ODE Solver QuLDE</h2>
<p>If you happen to have a quantum computer handy, hold your horses. <code class="language-plaintext highlighter-rouge">QuLDE</code> from
QuDiffEq.jl is an ODE solver designed for quantum computers. It utilizes the
Yao.jl quantum circuit simulator to run, but once Yao.jl supports QASM then
this will compile to something compatible with (future) quantum computing
hardware. This means that, in order to enter the new age of computing, all
you have to do is change <code class="language-plaintext highlighter-rouge">solve(prob,Tsit5())</code> to <code class="language-plaintext highlighter-rouge">solve(prob,QuLDE())</code> and you’re
there. Is it practical? Who knows (please let us know). Is it cool? Oh yeah!</p>
<p>See <a href="https://nextjournal.com/dgan181/julia-soc-19-quantum-algorithms-for-differential-equations">the quantum ODE solver blog post for more details</a>.</p>
<h2 id="commutative-noise-gpu-compatibility">Commutative Noise GPU compatibility</h2>
<p>The commutative noise SDE solvers are now GPU-compatible thanks to GSoC student
Deepesh Thakur (@deeepeshthakur). The next step will be to implement high order
non-commutative noise SDE solvers and the associated iterated integral
approximations in a manner that is GPU-compatible.</p>
<h2 id="new-benchmark-and-tutorial-repository-setups">New benchmark and tutorial repository setups</h2>
<p>DiffEqBenchmarks.jl and DiffEqTutorials.jl are now fully updated to a Weave.jl
form. We still need to fix up a few benchmarks, but it’s in a state that is ready
for new contributions.</p>
<h2 id="optimized-multithreaded-extrapolation">Optimized multithreaded extrapolation</h2>
<p>The GBS extrapolation methods have been optimized, and they are now among
the most efficient methods at lower tolerances of the Float64 range for
non-stiff ODEs:</p>
<p><img src="https://user-images.githubusercontent.com/1814174/59899185-d56a5e80-93c1-11e9-86a0-ea09bfaa59ed.png" alt="non-stiff extrapolation" /></p>
<p>Thank you to Konstantin Althaus (@AlthausKonstantin) for contributing the first
version of this algorithm and GSoC student Saurabh Agarwal (@saurabhkgp21) for
adding automatic parallelization of the method.</p>
<p>This method will see further improvements as multithreading improves
in Julia v1.2. The new PARTR features will allow our internal <code class="language-plaintext highlighter-rouge">@threads</code> loop
to perform dynamic work-stealing, which will definitely be a good improvement to
the current parallelism structure. So stay tuned: this will likely benchmark
even better in a few months.</p>
<h2 id="fully-non-allocating-exp-in-exponential-integrators">Fully non-allocating exp! in exponential integrators</h2>
<p>Thanks to Yingbo Ma (@YingboMa) for making the internal <code class="language-plaintext highlighter-rouge">exp</code> calls of the
exponential integrators non-allocating. Continued improvements to this category
of methods are starting to show promise in the area of semilinear PDEs.</p>
<h2 id="rosenbrock-w-methods">Rosenbrock-W methods</h2>
<p>JSoC student Langwen Huang (@huanglangwen) has added the Rosenbrock-W class of
methods to OrdinaryDiffEq.jl. These methods are like the Rosenbrock methods
but are able to reuse their W matrix for multiple steps, allowing the method
to scale to larger ODEs more efficiently. Since the Rosenbrock methods
benchmark as the fastest methods for small ODEs right now, this is an exciting
new set of methods which will get optimized over the course of the summer.
Efficient Jacobian reuse techniques and the ability to utilize the sparse
differentiation tooling are next on this project.</p>
<h1 id="next-directions">Next Directions</h1>
<p>Our current development is very much driven by the ongoing GSoC/JSoC projects,
which is a good thing because they are outputting some really amazing results!</p>
<p>Here are some things to look forward to:</p>
<ul>
<li>Higher order SDE methods for non-commutative noise</li>
<li>Parallelized methods for stiff ODEs</li>
<li>Integration of sparse colored differentiation into the differential equation solvers</li>
<li>Jacobian reuse efficiency in Rosenbrock-W methods</li>
<li>Exponential integrator improvements</li>
<li>Native Julia fully implicit ODE (DAE) solving in OrdinaryDiffEq.jl</li>
<li>Automated matrix-free finite difference PDE operators</li>
<li>Surrogate optimization</li>
<li>GPU-based Monte Carlo parallelism</li>
</ul>
Mon, 24 Jun 2019 12:00:00 +0000
http://juliadiffeq.org/2019/06/24/coloring.html

DifferentialEquations.jl v6.5.0: Stiff SDEs, VectorContinuousCallback, Multithreaded Extrapolation

<p>Well, we zoomed toward this one. This release has a lot of very compelling
new features for performance in specific domains. Large ODEs, stiff SDEs, high
accuracy ODE solving, many callbacks, etc. are all specialized on and greatly
improved in this release.</p>
Thu, 06 Jun 2019 12:00:00 +0000
http://juliadiffeq.org/2019/06/06/StiffSDEs.html

DifferentialEquations.jl v6.4.0: Full GPU ODE, Performance, ModelingToolkit

<p>This is a huge release. We should take the time to thank every contributor
to the JuliaDiffEq package ecosystem. A lot of this release focuses on performance
features: the ability to use stiff ODE solvers on the GPU, with automated
tooling for matrix-free Newton-Krylov, faster broadcast, better Jacobian
re-use algorithms, memory use reduction, etc. All of these combined give some
pretty massive performance boosts in the area of medium to large sized highly
stiff ODE systems. In addition, numerous robustness fixes have enhanced the
usability of these tools, along with a few new features like an implementation
of extrapolation for ODEs and the release of ModelingToolkit.jl.</p>
<p>Let’s start by summing up this release with an example.</p>
<h3 id="comprehensive-example">Comprehensive Example</h3>
<p>Here’s a nice showcase of DifferentialEquations.jl: Neural ODE with batching on
the GPU (without internal data transfers) with high order adaptive implicit ODE
solvers for stiff equations using matrix-free Newton-Krylov via preconditioned
GMRES and trained using checkpointed adjoint equations. Few programs work
directly with neural networks and allow for batching, few utilize GPUs, few
have methods applicable to highly stiff equations, few allow for large stiff
equations via matrix-free Newton-Krylov, and finally few have checkpointed
adjoints. This is all done in a high level programming language. What does the
code for this look like?</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">using</span> <span class="n">OrdinaryDiffEq</span><span class="x">,</span> <span class="n">Flux</span><span class="x">,</span> <span class="n">DiffEqFlux</span><span class="x">,</span> <span class="n">DiffEqOperators</span><span class="x">,</span> <span class="n">CuArrays</span>
<span class="n">x</span> <span class="o">=</span> <span class="kt">Float32</span><span class="x">[</span><span class="mf">2.</span><span class="x">;</span> <span class="mf">0.</span><span class="x">]</span><span class="o">|></span><span class="n">gpu</span>
<span class="n">tspan</span> <span class="o">=</span> <span class="kt">Float32</span><span class="o">.</span><span class="x">((</span><span class="mf">0.0f0</span><span class="x">,</span><span class="mf">25.0f0</span><span class="x">))</span>
<span class="n">dudt</span> <span class="o">=</span> <span class="n">Chain</span><span class="x">(</span><span class="n">Dense</span><span class="x">(</span><span class="mi">2</span><span class="x">,</span><span class="mi">50</span><span class="x">,</span><span class="n">tanh</span><span class="x">),</span><span class="n">Dense</span><span class="x">(</span><span class="mi">50</span><span class="x">,</span><span class="mi">2</span><span class="x">))</span><span class="o">|></span><span class="n">gpu</span>
<span class="n">p</span> <span class="o">=</span> <span class="n">DiffEqFlux</span><span class="o">.</span><span class="n">destructure</span><span class="x">(</span><span class="n">dudt</span><span class="x">)</span>
<span class="n">dudt_</span><span class="x">(</span><span class="n">du</span><span class="x">,</span><span class="n">u</span><span class="o">::</span><span class="n">TrackedArray</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span> <span class="o">=</span> <span class="n">du</span> <span class="o">.=</span> <span class="n">DiffEqFlux</span><span class="o">.</span><span class="n">restructure</span><span class="x">(</span><span class="n">dudt</span><span class="x">,</span><span class="n">p</span><span class="x">)(</span><span class="n">u</span><span class="x">)</span>
<span class="n">dudt_</span><span class="x">(</span><span class="n">du</span><span class="x">,</span><span class="n">u</span><span class="o">::</span><span class="kt">AbstractArray</span><span class="x">,</span><span class="n">p</span><span class="x">,</span><span class="n">t</span><span class="x">)</span> <span class="o">=</span> <span class="n">du</span> <span class="o">.=</span> <span class="n">Flux</span><span class="o">.</span><span class="n">data</span><span class="x">(</span><span class="n">DiffEqFlux</span><span class="o">.</span><span class="n">restructure</span><span class="x">(</span><span class="n">dudt</span><span class="x">,</span><span class="n">p</span><span class="x">)(</span><span class="n">u</span><span class="x">))</span>
<span class="n">ff</span> <span class="o">=</span> <span class="n">ODEFunction</span><span class="x">(</span><span class="n">dudt_</span><span class="x">,</span><span class="n">jac_prototype</span> <span class="o">=</span> <span class="n">JacVecOperator</span><span class="x">(</span><span class="n">dudt_</span><span class="x">,</span><span class="n">x</span><span class="x">))</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">ODEProblem</span><span class="x">(</span><span class="n">ff</span><span class="x">,</span><span class="n">x</span><span class="x">,</span><span class="n">tspan</span><span class="x">,</span><span class="n">p</span><span class="x">)</span>
<span class="n">diffeq_adjoint</span><span class="x">(</span><span class="n">p</span><span class="x">,</span><span class="n">prob</span><span class="x">,</span><span class="n">KenCarp4</span><span class="x">(</span><span class="n">linsolve</span><span class="o">=</span><span class="n">LinSolveGMRES</span><span class="x">());</span><span class="n">u0</span><span class="o">=</span><span class="n">x</span><span class="x">,</span>
<span class="n">saveat</span><span class="o">=</span><span class="mf">0.0</span><span class="o">:</span><span class="mf">0.1</span><span class="o">:</span><span class="mf">25.0</span><span class="x">,</span><span class="n">backsolve</span><span class="o">=</span><span class="nb">false</span><span class="x">)</span>
</code></pre></div></div>
<p>That is 10 lines of code, and we can continue to make it even more succinct.</p>
<p>Now, onto the release highlights.</p>
<h2 id="full-gpu-support-in-ode-solvers">Full GPU Support in ODE Solvers</h2>
<p>Now not only the non-stiff ODE solvers but also the stiff ODE solvers allow
the initial condition to be a GPUArray, with the internal methods performing
no indexing at all so that every computation takes place on the GPU without
data transfers. This allows expensive right-hand side calculations, like those
in neural ODEs or PDE discretizations, to utilize GPU acceleration without
worrying about whether the cost of data transfers will overtake the solver
speed enhancements.</p>
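<p>As a minimal sketch of the idea (assuming CuArrays and a CUDA-capable GPU;
the problem here is a made-up linear system purely for illustration), a stiff
solve runs entirely on the GPU once the state is a GPU array:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using OrdinaryDiffEq, CuArrays, LinearAlgebra

# Linear test problem u' = A*u with the state and work arrays on the GPU
A  = cu(Float32[-2.0 1.0; 1.0 -2.0])
u0 = cu(Float32[1.0, 0.0])
f(du, u, p, t) = mul!(du, A, u)   # no scalar indexing: the RHS stays on the GPU
prob = ODEProblem(f, u0, (0.0f0, 1.0f0))
sol = solve(prob, Rosenbrock23())  # a stiff solver; internals avoid indexing
</code></pre></div></div>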
<p>While the presence of broadcast throughout the solvers might raise
performance concerns, those concerns turn out to be unfounded.</p>
<h2 id="fast-diffeq-specific-broadcast">Fast DiffEq-Specific Broadcast</h2>
<p>Yingbo Ma (@YingboMa) implemented a fancy broadcast wrapper that allows for
all sorts of information to be passed to the compiler in the differential
equation solver’s internals, making a bunch of no-aliasing and sizing assumptions
that are normally not possible. These change the internals to all use a
special <code class="language-plaintext highlighter-rouge">@..</code> which turns out to be faster than standard loops, and this is the
magic that really enabled the GPU support to happen without performance
regressions (and in fact, we got some speedups from this, close to 2x in some
cases!)</p>
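<p>The macro can also be used directly. A sketch of the idea (assuming
<code class="language-plaintext highlighter-rouge">@..</code> is accessible from DiffEqBase, where it lived at the time):</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using DiffEqBase: @..

u   = rand(1000); k1 = rand(1000); k2 = rand(1000)
tmp = similar(u)
dt  = 0.01
# Fuses like @. tmp = u + dt*(0.3k1 + 0.7k2), but additionally tells the
# compiler the arrays do not alias and have matching sizes
@.. tmp = u + dt * (0.3 * k1 + 0.7 * k2)
</code></pre></div></div>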
<h2 id="smart-linsolve-defaults-and-linsolvegmres">Smart linsolve defaults and LinSolveGMRES</h2>
<p>One of the biggest performance-based features to be released is smarter linsolve
defaults. If you are using dense arrays with a standard Julia build, OpenBLAS
does not perform recursive LU factorizations, which we found to be suboptimal
by about 5x in some cases. Thus our default linear solver now automatically
detects the BLAS installation and utilizes RecursiveFactorizations.jl to give
this speedup for many standard stiff ODE cases. In addition, if you passed a
sparse Jacobian for the <code class="language-plaintext highlighter-rouge">jac_prototype</code>, the linear solver now automatically
switches to a form that works for sparse Jacobians. If you use an
<code class="language-plaintext highlighter-rouge">AbstractDiffEqOperator</code>, the default linear solver automatically switches to
a Krylov subspace method (GMRES) and utilizes the matrix-free operator directly.
Banded matrices and Jacobians on the GPU are now automatically handled as well.</p>
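<p>For example, passing a sparse <code class="language-plaintext highlighter-rouge">jac_prototype</code>
is enough to flip the default to a sparse factorization (a minimal sketch):</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using OrdinaryDiffEq, SparseArrays

function f!(du, u, p, t)
    du[1] = -u[1]
    du[2] = u[1] - 2.0 * u[2]
end
jac_sparsity = sparse([1.0 0.0; 1.0 1.0])  # structural nonzeros of the Jacobian
ff = ODEFunction(f!, jac_prototype = jac_sparsity)
prob = ODEProblem(ff, [1.0, 1.0], (0.0, 1.0))
sol = solve(prob, TRBDF2())  # linear solves now take the sparse path
</code></pre></div></div>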
<p>Of course, that’s just the defaults, and most of this was possible before but
now has just been made more accessible. In addition to these, the ability to
easily switch to GMRES was added via <code class="language-plaintext highlighter-rouge">LinSolveGMRES</code>. Just add
<code class="language-plaintext highlighter-rouge">linsolve = LinSolveGMRES()</code> to any native Julia algorithm with a swappable
linear solver and it’ll switch to using GMRES. You can also pass options for
preconditioners and tolerances. We will continue to integrate this
better into our integrators as doing so will enhance the efficiency when
solving large sparse systems.</p>
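<p>A sketch of the switch (the tolerance keyword mirrors IterativeSolvers.jl
and may differ across versions):</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using OrdinaryDiffEq

function f!(du, u, p, t)
    du[1] = -u[1]
    du[2] = u[1] - 2.0 * u[2]
end
prob = ODEProblem(f!, [1.0, 1.0], (0.0, 1.0))
# Any native algorithm with a swappable linear solver accepts linsolve
sol = solve(prob, KenCarp4(linsolve = LinSolveGMRES(tol = 1e-8)))
</code></pre></div></div>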
<h2 id="automated-jv-products-via-autodifferentiation">Automated J*v Products via Autodifferentiation</h2>
<p>When using <code class="language-plaintext highlighter-rouge">GMRES</code>, one does not need to construct the full Jacobian matrix.
Instead, one can simply use the directional derivatives in the direction of
<code class="language-plaintext highlighter-rouge">v</code> in order to compute <code class="language-plaintext highlighter-rouge">J*v</code>. This has now been put into an operator form
via <code class="language-plaintext highlighter-rouge">JacVecOperator(dudt_,x)</code>, so now users can directly ask for this to
occur using one line. It allows for the use of autodifferentiation or
numerical differentiation to calculate the <code class="language-plaintext highlighter-rouge">J*v</code>.</p>
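<p>A CPU-sized sketch mirroring the snippet above:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using OrdinaryDiffEq, DiffEqOperators

function f!(du, u, p, t)
    du[1] = -u[1]
    du[2] = u[1] - 2.0 * u[2]
end
u0 = [1.0, 1.0]
# Lazy J*v operator: directional derivatives via autodiff, no Jacobian matrix
ff = ODEFunction(f!, jac_prototype = JacVecOperator(f!, u0))
prob = ODEProblem(ff, u0, (0.0, 1.0))
sol = solve(prob, TRBDF2(linsolve = LinSolveGMRES()))
</code></pre></div></div>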
<h2 id="destats">DEStats</h2>
<p>One of the nichest but nicest new features is DEStats. If you do <code class="language-plaintext highlighter-rouge">sol.destats</code>
then you will see a load of information on how many steps were taken, how many
<code class="language-plaintext highlighter-rouge">f</code> calls were done, etc., giving a broad overview of the performance of the
algorithm. Thanks to Kanav Gupta (@kanav99) and Yingbo Ma (@YingboMa) for really
driving this feature since it has allowed for a lot of these optimizations to
be more thoroughly investigated. You can expect DiffEq development to
accelerate with this information!</p>
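<p>For example (the field names here follow the docs of this release and may
have shifted since):</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using OrdinaryDiffEq

f(u, p, t) = 1.01 * u
prob = ODEProblem(f, 0.5, (0.0, 1.0))
sol = solve(prob, Tsit5())
sol.destats          # summary: step counts, f evaluations, etc.
sol.destats.nf       # number of f calls
sol.destats.naccept  # number of accepted steps
</code></pre></div></div>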
<h2 id="improved-jacobian-reuse">Improved Jacobian Reuse</h2>
<p>One of the things noticed using DEStats was that the number of Jacobian
calculations and factorizations could be severely reduced. Yingbo Ma (@YingboMa)
did just that, greatly increasing the performance of all implicit methods like
<code class="language-plaintext highlighter-rouge">KenCarp4</code>, with benchmarks on systems of 1000+ ODEs where OrdinaryDiffEq’s
native methods outperformed Sundials’ CVODE_BDF. This still has plenty of room
for improvement.</p>
<h2 id="diffeqbiological-performance-improvements-for-large-networks-speed-and-sparsity">DiffEqBiological performance improvements for large networks (speed and sparsity)</h2>
<p>Samuel Isaacson (@isaacsas) has been instrumental in improving DiffEqBiological.jl
and its ability to handle large reaction networks. It can now parse the networks
much faster and can build Jacobians which utilize sparse matrices. It pairs
with his ParseRxns(???) library and has been a major source of large stiff
test problems!</p>
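<p>For scale, a two-reaction birth-death network in the DSL looks like this
(a minimal sketch; the same network object can build ODE, SDE, or jump
formulations):</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using DiffEqBiological, OrdinaryDiffEq

# Birth-death process: X duplicates at rate c1 and decays at rate c2
rn = @reaction_network begin
    c1, X --&gt; 2X
    c2, X --&gt; 0
end c1 c2
prob = ODEProblem(rn, [10.0], (0.0, 1.0), [2.0, 1.0])
sol = solve(prob, Tsit5())
</code></pre></div></div>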
<h2 id="partial-neural-odes-batching-and-gpu-fixes">Partial Neural ODEs, Batching and GPU Fixes</h2>
<p>We now have working examples of partial neural differential equations, which
are equations which have pre-specified portions that are known while others
are learnable neural networks. These also allow for batched data and GPU
acceleration. Not much else to say except let your neural diffeqs go wild!</p>
<h2 id="low-memory-rk-optimality-and-alias_u0">Low Memory RK Optimality and Alias_u0</h2>
<p>Kanav Gupta (@kanav99) and Hendrik Ranocha (@ranocha) did an amazing job on memory optimizations of
low-memory Runge-Kutta methods for hyperbolic or advection-dominated PDEs.
Essentially these methods have a minimal number of registers which are
theoretically required for the method. Kanav added some tricks to the implementation
(using a fun <code class="language-plaintext highlighter-rouge">=</code> -> <code class="language-plaintext highlighter-rouge">+=</code> overload idea) and Hendrik added the <code class="language-plaintext highlighter-rouge">alias_u0</code> argument
to allow for using the passed in initial condition as one of the registers. Unit
tests confirm that our implementations achieve the minimum possible number of
registers, allowing for large PDE discretizations to make use of
DifferentialEquations.jl without loss of memory efficiency. We hope to see
this in use in some large-scale simulation software!</p>
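<p>A sketch of the usage (the low-storage method name is one of several such
methods in OrdinaryDiffEq):</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using OrdinaryDiffEq

f!(du, u, p, t) = (du .= -u)   # stand-in for a large PDE discretization RHS
u0 = rand(10^6)
prob = ODEProblem(f!, u0, (0.0, 1.0))
# alias_u0 = true lets the integrator use u0 itself as one of its registers,
# so u0 is mutated but one state-sized allocation is saved
sol = solve(prob, CarpenterKennedy2N54(), alias_u0 = true, save_everystep = false)
</code></pre></div></div>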
<h2 id="more-robust-callbacks">More Robust Callbacks</h2>
<p>Our <code class="language-plaintext highlighter-rouge">ContinuousCallback</code> implementation now has increased robustness in double
event detection, using a new strategy. Try to break it.</p>
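<p>The classic stress test is the bouncing ball, whose event (the height
crossing zero) is found by rootfinding on the condition function:</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using OrdinaryDiffEq

function ball!(du, u, p, t)
    du[1] = u[2]      # height changes by velocity
    du[2] = -9.81     # velocity changes by gravity
end
condition(u, t, integrator) = u[1]   # event fires when the height hits zero
affect!(integrator) = (integrator.u[2] = -0.9 * integrator.u[2])  # inelastic bounce
cb = ContinuousCallback(condition, affect!)
prob = ODEProblem(ball!, [1.0, 0.0], (0.0, 5.0))
sol = solve(prob, Tsit5(), callback = cb)
</code></pre></div></div>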
<h2 id="gbs-extrapolation">GBS Extrapolation</h2>
<p>New contributor Konstantin Althaus (@AlthausKonstantin) implemented midpoint
extrapolation methods for ODEs using barycentric formulas and different
adaptivity behaviors. We will be investigating these methods for their
parallelizability via multithreading in the context of stiff and non-stiff ODEs.</p>
<h2 id="modelingtoolkitjl-release">ModelingToolkit.jl Release</h2>
<p>ModelingToolkit.jl has now gotten some form of a stable release. A lot of credit
goes to Harrison Grodin (@HarrisonGrodin). While it has
already been out there and found quite a bit of use, it has really picked up
steam over the last year as a modeling framework suitable for the flexibility
of DifferentialEquations.jl. We hope to continue its development and add features
like event handling to its IR.</p>
<h2 id="sundials-jv-interface-stats-and-preconditioners">SUNDIALS J*v interface, stats, and preconditioners</h2>
<p>While we are phasing Sundials out of our standard DifferentialEquations.jl
practice, Sundials.jl continues to improve as we add more features to
benchmark against. Sundials’ J*v interface has now been exposed, so adding a
DiffEqOperator to the <code class="language-plaintext highlighter-rouge">jac_prototype</code> will work with Sundials. <code class="language-plaintext highlighter-rouge">DEStats</code> is
hooked up to Sundials, and now you can pass preconditioners to its internal
Newton-Krylov methods.</p>
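<p>A sketch of the Newton-Krylov switch (preconditioner passing follows
Sundials.jl’s keyword interface, which may change):</p>
<div class="language-julia highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using Sundials, DiffEqBase

f!(du, u, p, t) = (du .= -0.5 .* u)
prob = ODEProblem(f!, [1.0, 2.0], (0.0, 1.0))
sol = solve(prob, CVODE_BDF(linear_solver = :GMRES))  # matrix-free Newton-Krylov
sol.destats  # stats are now collected for Sundials solves too
</code></pre></div></div>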
<h1 id="next-directions">Next Directions</h1>
<ul>
<li>Improved nonlinear solvers for stiff SDE handling</li>
<li>More adaptive methods for SDEs</li>
<li>Better boundary condition handling in DiffEqOperators.jl</li>
<li>More native implicit ODE (DAE) solvers</li>
<li>Adaptivity in the MIRK BVP solvers</li>
<li>LSODA integrator interface</li>
<li>Improved BDF</li>
</ul>
Thu, 09 May 2019 13:00:00 +0000
http://juliadiffeq.org/2019/05/09/GPU.html
http://juliadiffeq.org/2019/05/09/GPU.htmlDifferentialEquations.jl 6.0: Radau5, Hyperbolic PDEs, Dependency Reductions<p>This marks the release of DifferentialEquations.jl v6.0.0. Here’s a lowdown
of what has happened in the timeframe.</p>
Sat, 02 Feb 2019 10:00:00 +0000
http://juliadiffeq.org/2019/02/02/RadauAnderson.html
http://juliadiffeq.org/2019/02/02/RadauAnderson.htmlDifferentialEquations.jl 5.0: v1.0, Jacobian Types, EPIRK<p>This marks the release of DifferentialEquations.jl. There will be an accompanying
summary blog post which goes into more detail about our current state and sets
the focus for the organization’s v6.0 release. However, for now I would like
to describe some of the large-scale changes which have been included in this
release. Much thanks goes to the Google Summer of Code students who heavily
contributed to these advances.</p>
Mon, 20 Aug 2018 10:00:00 +0000
http://juliadiffeq.org/2018/08/20/FunctionInputEPIRK.html
http://juliadiffeq.org/2018/08/20/FunctionInputEPIRK.htmlDifferentialEquations.jl 4.6: Global Sensitivity Analysis, Variable Order Adams<p>Tons of improvements due to Google Summer of Code. Here’s what’s happened.</p>
Thu, 05 Jul 2018 10:00:00 +0000
http://juliadiffeq.org/2018/07/05/GSAVariableOrder.html
http://juliadiffeq.org/2018/07/05/GSAVariableOrder.htmlDifferentialEquations.jl 4.5: ABC, Adaptive Multistep, Maximum A Posteriori<p>Once again we stayed true to form and didn’t solve the problems in the
development list, but added a ton of new features anyway. Now that Google
Summer of Code (GSoC) is in full force, a lot of these updates are due to
our very awesome and productive students. Here’s what we got.</p>
Sat, 26 May 2018 10:00:00 +0000
http://juliadiffeq.org/2018/05/26/ABCMore.html
http://juliadiffeq.org/2018/05/26/ABCMore.htmlA "Jupyter" of DiffEq: Introducing Python and R Bindings for DifferentialEquations.jl<p>Differential equations are used for modeling throughout the sciences from astrophysical calculations to simulations of biochemical interactions. These models have to be simulated numerically due to the complexity of the resulting equations. However, numerically solving differential equations presents interesting software engineering challenges. On one hand, speed is of utmost importance. PDE discretizations quickly turn into ODEs that take days/weeks/months to solve, so reducing time by 5x or 10x can be the difference between a doable and an impractical computation. But these methods are difficult to optimize in a higher level language since a lot of the computations are small, hard-to-vectorize loops with a user-defined function directly in the middle (one SciPy developer described it as a <a href="https://github.com/scipy/scipy/pull/6326#issuecomment-336877517">“worst case scenario for Python”</a>). Thus higher level languages and problem-solving environments have resorted to a strategy of wrapping C++ and Fortran packages, and as described in a survey of differential equation solving suites, <a href="http://www.stochasticlifestyle.com/comparison-differential-equation-solver-suites-matlab-r-julia-python-c-fortran/">most differential equation packages are wrapping the same few methods</a>.</p>
Mon, 30 Apr 2018 09:00:00 +0000
http://juliadiffeq.org/2018/04/30/Jupyter.html
http://juliadiffeq.org/2018/04/30/Jupyter.htmlDifferentialEquations.jl 4.4: Enhanced Stability and IMEX SDE Integrators<p>These are features long hinted at. The
<a href="https://arxiv.org/abs/1804.04344">Arxiv paper</a> is finally up and the new
methods from that paper are in the release. In this paper I wanted to “complete”
the methods for additive noise and attempt to start enhancing the methods for
diagonal noise SDEs. Thus while it focuses on a constrained form of noise, this
is a form of noise present in a lot of models and, by using the constrained form,
allows for extremely optimized methods. See the
<a href="http://docs.juliadiffeq.org/latest/solvers/sde_solve.html">updated SDE solvers documentation</a>
for details on the new methods. Here’s what’s up!</p>
Sun, 15 Apr 2018 08:00:00 +0000
http://juliadiffeq.org/2018/04/15/StableSDE.html
http://juliadiffeq.org/2018/04/15/StableSDE.htmlDifferentialEquations.jl 4.3: Automatic Stiffness Detection and Switching<p>Okay, this is a quick release. However, there’s so much good stuff coming out
that I don’t want them to overlap and steal each other’s thunder! This release
has two long awaited features for increasing the ability to automatically solve
difficult differential equations with less user input.</p>
Mon, 09 Apr 2018 08:00:00 +0000
http://juliadiffeq.org/2018/04/09/AutoSwitch.html
http://juliadiffeq.org/2018/04/09/AutoSwitch.htmlDifferentialEquations.jl 4.2: Krylov Exponential Integrators, Non-Diagonal Adaptive SDEs, Tau-Leaping<p>This is a jam packed release. A lot of new integration methods were developed
in the last month to address specific issues of community members. Some of these
methods are one of a kind!</p>
Sat, 31 Mar 2018 10:00:00 +0000
http://juliadiffeq.org/2018/03/31/AdaptiveLowSDE.html
http://juliadiffeq.org/2018/03/31/AdaptiveLowSDE.htmlDifferentialEquations.jl 4.1: New ReactionDSL and KLU Sundials<p>Alright, that syntax change was painful but now everything seems to have
calmed down. We thank everyone for sticking with us and helping file issues
as necessary. It seems most people have done the syntax update and now we’re
moving on. In this release we are back to our usual focus on feature
updates. There are changes, but once again we are able to deprecate our
changes, so it’s much easier on users.</p>
Sat, 17 Feb 2018 10:00:00 +0000
http://juliadiffeq.org/2018/02/17/Reactions.html
http://juliadiffeq.org/2018/02/17/Reactions.htmlDifferentialEquations.jl 4.0: Breaking Syntax Changes, Adjoint Sensitivity, Bayesian Estimation, and ETDRK4<p>In this release we have a big exciting breaking change to our API. We are taking
a “now or never” approach to fixing all of the API cruft we’ve gathered as we’ve
expanded to different domains. Now that we cover the space of problems we wish
to solve, we realize many inconsistencies we’ve introduced in our syntax.
Instead of keeping them, we’ve decided to do a breaking change to fix these
problems.</p>
Wed, 24 Jan 2018 07:30:00 +0000
http://juliadiffeq.org/2018/01/24/Parameters.html
http://juliadiffeq.org/2018/01/24/Parameters.htmlDifferentialEquations.jl 3.4: Sundials 3.1, ARKODE, Static Arrays<p>In this release we have a big exciting breaking change to Sundials and some
performance increases.</p>
Mon, 15 Jan 2018 11:30:00 +0000
http://juliadiffeq.org/2018/01/15/Sundials.html
http://juliadiffeq.org/2018/01/15/Sundials.htmlDifferentialEquations.jl 3.3: IMEX Solvers<p>What’s a better way to ring in the new year than to announce new features?
This ecosystem 3.3 release we have a few exciting developments, and at the
top of the list is new IMEX schemes. Let’s get right to it.</p>
Mon, 01 Jan 2018 11:30:00 +0000
http://juliadiffeq.org/2018/01/01/IMEX.html
http://juliadiffeq.org/2018/01/01/IMEX.htmlDifferentialEquations.jl 3.2: Expansion of Event Compatibility<p>DifferentialEquations.jl 3.2 is just a nice feature update. This hits a few
long requested features.</p>
Mon, 11 Dec 2017 00:30:00 +0000
http://juliadiffeq.org/2017/12/11/Events.html
http://juliadiffeq.org/2017/12/11/Events.htmlDifferentialEquations.jl 3.1: Jacobian Passing<p>The DifferentialEquations.jl 3.0 release had most of the big features and was
<a href="http://www.stochasticlifestyle.com/differentialequations-jl-3-0-roadmap-4-0/">featured in a separate blog post</a>.
Now in this release we had a few big incremental developments. We expanded
the capabilities of our wrapped libraries and completed one of the most
requested features: passing Jacobians into the IDA and DASKR DAE solvers.
Let’s just get started there:</p>
Fri, 24 Nov 2017 01:30:00 +0000
http://juliadiffeq.org/2017/11/24/Jacobians.html
http://juliadiffeq.org/2017/11/24/Jacobians.htmlStiff SDE and DDE Solvers<p>The end of the summer cycle means that many things, including Google Summer of
Code projects, are being released. A large part of the current focus has been to
develop tools to make solving PDEs easier, and also creating efficient tools
for generalized stiff differential equations. I think we can claim to be one of
the first libraries to include methods for stiff SDEs, one of the first for stiff
DDEs, and one of the first to include higher order adaptive Runge-Kutta Nystrom
schemes. And that’s not even looking at a lot of the more unique stuff in this
release. Take a look.</p>
Sat, 09 Sep 2017 01:30:00 +0000
http://juliadiffeq.org/2017/09/09/StiffDDESDE.html
http://juliadiffeq.org/2017/09/09/StiffDDESDE.htmlSDIRK Methods<p>This has been a very productive summer! Let me start by saying that a relative
newcomer to the JuliaDiffEq team, David Widmann, has been doing some impressive
work that has really expanded the internal capabilities of the ordinary and
delay differential equation solvers. Much of the code has been streamlined
due to his efforts which has helped increase our productivity, along with helping
us identify and solve potential areas of floating point inaccuracies. In addition,
in this release we are starting to roll out some of the results of the Google
Summer of Code projects. Together, there’s some really exciting stuff!</p>
Sun, 13 Aug 2017 01:30:00 +0000
http://juliadiffeq.org/2017/08/13/SDIRK.html
http://juliadiffeq.org/2017/08/13/SDIRK.htmlHigh Order Rosenbrock and Symplectic Methods<p>For a while I have been saying that JuliaDiffEq really needs some fast high
accuracy stiff solvers and symplectic methods to take it to the next level.
I am happy to report that these features have arrived, along with some other
exciting updates. And yes, they benchmark really well. With new Rosenbrock methods
specifically designed for stiff nonlinear parabolic PDE discretizations, SSPRK
enhancements specifically for hyperbolic PDEs, and symplectic methods for Hamiltonian
systems, physics can look at these release notes with glee. Here’s the full ecosystem
release notes.</p>
Fri, 07 Jul 2017 01:30:00 +0000
http://juliadiffeq.org/2017/07/07/SymplecticRosenbrock.html
http://juliadiffeq.org/2017/07/07/SymplecticRosenbrock.htmlFilling In The Interop Packages and Rosenbrock<p>In the <a href="http://www.stochasticlifestyle.com/differentialequations-jl-2-0-state-ecosystem/">2.0 state of the ecosystem post</a>
it was noted that, now that we have a clearly laid out and expansive common API,
the next goal is to fill it in. This set of releases tackles the lowest hanging
fruits in that battle. Specifically, the interop packages were setup to be as
complete in their interfaces as possible, and the existing methods which could
expand were expanded. Time for specifics.</p>
Thu, 18 May 2017 01:30:00 +0000
http://juliadiffeq.org/2017/05/18/Filling_in.html
http://juliadiffeq.org/2017/05/18/Filling_in.htmlDifferentialEquations.jl 2.0<p>This marks the release of ecosystem version 2.0. All of the issues got looked
over. All (yes all!) of the API suggestions that were recorded in issues in
JuliaDiffEq packages have been addressed! Below are the API changes that have occurred.
This marks a really good moment for the JuliaDiffEq ecosystem because it means all
of the long-standing planned API changes are complete. Of course new things may come
up, but there are no more planned changes to core functionality. This means that we can simply
work on new features in the future (and of course field bug reports as they come).
A blog post detailing our full 2.0 achievements plus our 3.0 goals will come out at
our one year anniversary. But for now I want to address what the API changes are,
and the new features of this latest update.</p>
Sun, 30 Apr 2017 01:30:00 +0000
http://juliadiffeq.org/2017/04/30/API_changes.html
http://juliadiffeq.org/2017/04/30/API_changes.htmlDifferentialEquations.jl v1.9.1<p>DifferentialEquations v1.9.1 is a feature update which, well, brings a lot of new
features. But before we get started, there is one thing to highlight:</p>
Fri, 07 Apr 2017 01:30:00 +0000
http://juliadiffeq.org/2017/04/07/features.html
http://juliadiffeq.org/2017/04/07/features.htmlDifferentialEquations.jl Workshop at JuliaCon 2017<p><a href="http://juliacon.org/2017/talks.html">There will be a workshop on DifferentialEquations.jl at this year’s JuliaCon!</a>
The title is “The Unique Features and Performance of DifferentialEquations.jl”.
The goal will be to teach new users how to solve a wide variety of differential
equations, and show how to achieve the best possible performance. I hope to lead
users through an example problem: start with ODEs and build a simple model. I
will show the tools for analyzing the solution to ODEs, show how to choose the
best solver for your problem, show how to use non-standard features like arbitrary
precision arithmetic. From there, we seamlessly flow into more in-depth
analysis and models. We will start estimating parameters of the ODEs, and then
make the models more realistic by adding delays, stochasticity (randomness), and
Gillespie models (discrete stochastic models related to differential equations),
and running stochastic Monte Carlo experiments in parallel (in a way that
automatically parallelizes across multiple nodes of an HPC!).</p>
Tue, 04 Apr 2017 11:00:00 +0000
http://juliadiffeq.org/2017/04/04/juliacon.html
http://juliadiffeq.org/2017/04/04/juliacon.htmlDifferentialEquations.jl v1.8.0<p>DifferentialEquations.jl v1.8.0 is a new release for the JuliaDiffEq ecosystem.
As promised, the API is stable and there should be no breaking changes. The tag
PRs have been opened and it will take a couple of days/weeks for this to be available.
For an early preview, see <a href="http://docs.juliadiffeq.org/dev/">the in-development documentation</a>.
When the release is available, a new version of the documentation will be tagged.</p>
Thu, 09 Feb 2017 17:00:00 +0000
http://juliadiffeq.org/2017/02/09/interps.html
http://juliadiffeq.org/2017/02/09/interps.htmlDifferentialEquations.jl v1.6.0<p>DifferentialEquations.jl v1.6.0 is a stable version of the JuliaDiffEq ecosystem.
This tag includes many new features, including:</p>
Sat, 14 Jan 2017 17:00:00 +0000
http://juliadiffeq.org/2017/01/14/stable.html
http://juliadiffeq.org/2017/01/14/stable.htmlBase<p>A new set of tags will be going through over the next week. I am working with Tony to make sure there is no breakage, and for the most part the API has not changed. What has changed is the API for events and callbacks, there is a PR in DiffEqDocs.jl for the new API. The translation to the new API should be really easy: it’s almost the exact same thing but now a type-based API instead of a macro-based API (and will be cross-package). Also included is a new “integrator” interface which gives step-wise control over integration routines, starting with support from OrdinaryDiffEq.jl.</p>
Sun, 08 Jan 2017 00:00:00 +0000
http://juliadiffeq.org/2017/01/08/base.html
http://juliadiffeq.org/2017/01/08/base.htmlPDEs Update<p>Tags since the last blog post:</p>
Wed, 21 Dec 2016 17:00:00 +0000
http://juliadiffeq.org/2016/12/21/fem.html
http://juliadiffeq.org/2016/12/21/fem.htmlOrdinaryDiffEq v0.5<p>OrdinaryDiffEq.jl has received two tags. This latest tag, v0.5, adds compatibility with the latest Julia v0.6 nightly (similar changes have been added to many solvers like StochasticDiffEq.jl on master, but have not yet been tagged).</p>
Wed, 21 Dec 2016 17:00:00 +0000
http://juliadiffeq.org/2016/12/21/saveat.html
http://juliadiffeq.org/2016/12/21/saveat.htmlHello world<p>Hello world! This is the first post from the JuliaDiffEq organization. This
blog will be used to share the most recent updates to the JuliaDiffEq ecosystem.
Hopefully this will make it easier for everyone to follow our developments.</p>
Sat, 26 Nov 2016 17:00:00 +0000
http://juliadiffeq.org/2016/11/26/hello.html
http://juliadiffeq.org/2016/11/26/hello.html