Implicit Nonlinear Dynamics: Michaelis-Menten
What if you want to estimate an implicitly defined system of the form $f(u_t, u, p, t) = 0$? The solution: Implicit Sparse Identification. This method was originally described in this paper, and robust algorithms now exist to identify such systems. We will focus on Michaelis-Menten kinetics. As before, we will define the DataDrivenProblem and the Basis containing possible candidate functions for our sparse_regression!. Let's generate some data! We will use two experiments starting from different initial conditions.
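To see why an implicit formulation is natural here, multiply the Michaelis-Menten rate law through by its denominator. The rational right-hand side becomes a polynomial relation between the state and its derivative:

```latex
\dot{u} = 0.6 - \frac{1.5\,u}{0.3 + u}
\quad\Longrightarrow\quad
(0.3 + u)\,\dot{u} - 0.6\,(0.3 + u) + 1.5\,u = 0
\quad\Longrightarrow\quad
0.3\,\dot{u} + u\,\dot{u} - 0.18 + 0.9\,u = 0 .
```

Every term is now either a monomial in $u$ or a monomial multiplied by $\dot{u}$, which is exactly the structure of the candidate basis constructed below.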
using DataDrivenDiffEq
using LinearAlgebra
using ModelingToolkit
using OrdinaryDiffEq
using Plots
function michaelis_menten(u, p, t)
[0.6 - 1.5u[1]/(0.3+u[1])]
end
u0 = [0.5]
problem_1 = ODEProblem(michaelis_menten, u0, (0.0, 4.0));
solution_1 = solve(problem_1, Tsit5(), saveat = 0.1);
problem_2 = ODEProblem(michaelis_menten, 2*u0, (4.0, 8.0));
solution_2 = solve(problem_2, Tsit5(), saveat = 0.1);
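As a quick sanity check on the model (a minimal standalone sketch, independent of the solver): the rate law has a fixed point where 0.6 = 1.5u/(0.3 + u), i.e. u* = 0.18/0.9 = 0.2, so both trajectories should relax toward 0.2.

```julia
# Fixed point of du/dt = 0.6 - 1.5u/(0.3 + u):
# 0.6 (0.3 + u) = 1.5 u  =>  0.18 = 0.9 u  =>  u* = 0.2
michaelis_menten(u, p, t) = [0.6 - 1.5u[1] / (0.3 + u[1])]
du = michaelis_menten([0.2], [], 0.0)
@assert isapprox(du[1], 0.0; atol = 1e-12)
```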
Since we have multiple trajectories at hand, we define a DataDrivenDataset, which collects multiple problems but handles them as a single unit during processing.
function michaelis_menten(X::AbstractMatrix, p, t::AbstractVector)
reduce(hcat, map((x,ti)->michaelis_menten(x, p, ti), eachcol(X), t))
end
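The reduce(hcat, map(...)) pattern above applies the single-sample rate function to every column of the state matrix and stacks the resulting derivative columns. A minimal standalone sketch of the same pattern, using a hypothetical per-column function f:

```julia
f(x, t) = [sum(x) + t]                       # hypothetical per-column function
X = [1.0 2.0 3.0;
     4.0 5.0 6.0]                            # states: one column per time point
t = [0.0, 0.1, 0.2]
# Apply f to each (column, time) pair and concatenate the columns.
DX = reduce(hcat, map((x, ti) -> f(x, ti), eachcol(X), t))
size(DX) == (1, 3)                           # one output row, one column per sample
```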
data = (
Experiment_1 = (X = Array(solution_1), t = solution_1.t, DX = michaelis_menten(Array(solution_1),[], solution_1.t) ),
Experiment_2 = (X = Array(solution_2), t = solution_2.t, DX = michaelis_menten(Array(solution_2),[], solution_2.t))
)
prob = DataDrivenDiffEq.ContinuousDataset(data);
plot(prob)
Next, we define our Basis. Since we want to identify an implicit system, we have to include candidate terms that take the implicit variables, here the differential D(u), as arguments, and inform the constructor of the meaning of these variables via the implicits keyword.
@parameters t
D = Differential(t)
@variables u[1:1](t)
h = [monomial_basis(u[1:1], 4)...]
basis = Basis([h; h .* D(u[1])], u, implicits = D.(u), iv = t)
Model ##Basis#506 with 10 equations
States : u[1](t)
Independent variable: t
Equations
φ₁ = 1
φ₂ = u[1](t)
φ₃ = u[1](t)^2
φ₄ = u[1](t)^3
...
φ₁₀ = (u[1](t)^4)*Differential(t)(u[1](t))
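Numerically, the ten candidates are the monomials 1, u, ..., u⁴ together with each monomial multiplied by the derivative. A quick plain-Julia sketch of evaluating this library at a single sample (the state and derivative values are purely illustrative):

```julia
u, du = 0.5, 0.3                 # illustrative state and derivative values
h = [u^k for k in 0:4]           # monomial basis up to degree 4: 1, u, u^2, u^3, u^4
φ = vcat(h, h .* du)             # [h; h .* D(u)] — ten candidate functions in total
length(φ) == 10
```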
Next, we define the ImplicitOptimizer and solve the problem. It wraps a standard optimizer, by default STLSQ, and performs implicit sparse regression on the selected basis. To improve our result, we batch the data using a DataSampler. Here, we use a train-test split of 0.8 and divide the training data into 10 batches. Since we are using a batching process, we select the best result across batches via the keyword by = :min.
sampler = DataSampler(
Split(ratio = 0.8), Batcher(n = 10)
)
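The sampler's effect can be pictured with plain index arithmetic (an illustrative sketch only; DataSampler performs the actual partitioning internally): the two trajectories contribute 2 × 41 = 82 samples, of which roughly 80 % go into training, split into 10 batches.

```julia
n = 82                                      # 2 trajectories × 41 samples (saveat = 0.1)
train = 1:round(Int, 0.8n)                  # ≈ 80 % train split, as in Split(ratio = 0.8)
# Chunk the training indices into 10 batches, as in Batcher(n = 10).
batches = collect(Iterators.partition(train, cld(length(train), 10)))
length(batches) == 10
```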
opt = ImplicitOptimizer(1e-1:1e-1:5e-1)
res = solve(prob, basis, opt, normalize = false, denoise = false, by = :min, sampler = sampler, maxiter = 1000);
Linear Solution with 1 equations and 4 parameters.
Returncode: solved
L₂ Norm error : [0.0005232967550040873]
AIC : [1969.7812992345657]
R² : [1.066610823134123]
As we can see, the DataDrivenSolution has good metrics. Furthermore, inspecting the underlying system shows that the original equations have been recovered correctly:
system = result(res)
plot(
plot(prob), plot(res), layout = (1,2)
)
Copy-Pasteable Code
using DataDrivenDiffEq
using LinearAlgebra
using ModelingToolkit
using OrdinaryDiffEq
function michaelis_menten(u, p, t)
[0.6 - 1.5u[1]/(0.3+u[1])]
end
u0 = [0.5]
problem_1 = ODEProblem(michaelis_menten, u0, (0.0, 4.0));
solution_1 = solve(problem_1, Tsit5(), saveat = 0.1);
problem_2 = ODEProblem(michaelis_menten, 2*u0, (4.0, 8.0));
solution_2 = solve(problem_2, Tsit5(), saveat = 0.1);
function michaelis_menten(X::AbstractMatrix, p, t::AbstractVector)
reduce(hcat, map((x,ti)->michaelis_menten(x, p, ti), eachcol(X), t))
end
data = (
Experiment_1 = (X = Array(solution_1), t = solution_1.t, DX = michaelis_menten(Array(solution_1),[], solution_1.t) ),
Experiment_2 = (X = Array(solution_2), t = solution_2.t, DX = michaelis_menten(Array(solution_2),[], solution_2.t))
)
prob = DataDrivenDiffEq.ContinuousDataset(data);
@parameters t
D = Differential(t)
@variables u[1:1](t)
h = [monomial_basis(u[1:1], 4)...]
basis = Basis([h; h .* D(u[1])], u, implicits = D.(u), iv = t)
println(basis) # hide
sampler = DataSampler(
Split(ratio = 0.8), Batcher(n = 10)
)
opt = ImplicitOptimizer(1e-1:1e-1:5e-1)
res = solve(prob, basis, opt, normalize = false, denoise = false, by = :min, sampler = sampler, maxiter = 1000);
println(res) # hide
system = result(res)
println(system) # hide
# This file was generated using Literate.jl, https://github.com/fredrikekre/Literate.jl
This page was generated using Literate.jl.