# Implicit Nonlinear Dynamics: Michaelis-Menten

What if you want to estimate an implicitly defined system of the form $f(u_t, u, p, t) = 0$? The solution: implicit sparse identification. This method was originally described in this paper, and robust algorithms now exist to identify such systems. We will focus on Michaelis-Menten kinetics. As before, we define the `DataDrivenProblem` and the `Basis` containing possible candidate functions for our `sparse_regression!`. Let's generate some data! We will use two experiments starting from different initial conditions.
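To see why the candidate library below mixes plain monomials with monomials multiplied by the derivative, note that the Michaelis-Menten rate law can be rewritten as a polynomial implicit equation (a short derivation, using the parameter values from the code below):

```math
\dot{u} = 0.6 - \frac{1.5u}{0.3 + u}
\quad\Longleftrightarrow\quad
(0.3 + u)\,\dot{u} - 0.6(0.3 + u) + 1.5u = 0
```

The implicit form $f(\dot{u}, u) = 0$ thus contains only the terms $1$, $u$, $\dot{u}$, and $u\dot{u}$, all of which appear in a library built from monomials $h(u)$ together with $h(u)\,\dot{u}$.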

```julia
using DataDrivenDiffEq
using LinearAlgebra
using ModelingToolkit
using OrdinaryDiffEq
using Plots
```

```julia
# Michaelis-Menten kinetics: constant production, saturating consumption
function michaelis_menten(u, p, t)
    [0.6 - 1.5u[1] / (0.3 + u[1])]
end

u0 = [0.5]

problem_1 = ODEProblem(michaelis_menten, u0, (0.0, 4.0));
solution_1 = solve(problem_1, Tsit5(), saveat = 0.1);
problem_2 = ODEProblem(michaelis_menten, 2 * u0, (4.0, 8.0));
solution_2 = solve(problem_2, Tsit5(), saveat = 0.1);
```

Since we have multiple trajectories at hand, we define a `DataDrivenDataset`, which collects multiple problems but handles them as a single unit during processing.

```julia
function michaelis_menten(X::AbstractMatrix, p, t::AbstractVector)
    reduce(hcat, map((x, ti) -> michaelis_menten(x, p, ti), eachcol(X), t))
end
```
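The column-wise mapping above can be illustrated with a small stand-alone sketch (redefining the scalar rate law locally under a stand-in name `f`, so the snippet is self-contained):

```julia
# Scalar rate law and its column-wise matrix extension, mirroring the
# pattern used in the tutorial.
f(x, p, t) = [0.6 - 1.5x[1] / (0.3 + x[1])]
f(X::AbstractMatrix, p, t::AbstractVector) =
    reduce(hcat, map((x, ti) -> f(x, p, ti), eachcol(X), t))

X = [0.5 1.0 2.0]                # three states stored column-wise
DX = f(X, [], [0.0, 0.1, 0.2])   # one derivative column per state column
size(DX)                         # (1, 3)
```

Each column of `X` is paired with the matching entry of `t`, and the resulting one-element derivative vectors are concatenated back into a matrix.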

```julia
data = (
    Experiment_1 = (X = Array(solution_1), t = solution_1.t,
                    DX = michaelis_menten(Array(solution_1), [], solution_1.t)),
    Experiment_2 = (X = Array(solution_2), t = solution_2.t,
                    DX = michaelis_menten(Array(solution_2), [], solution_2.t)),
)

# Collect the experiments into a single continuous dataset
prob = DataDrivenDiffEq.ContinuousDataset(data);

plot(prob)
```

Next, we define our `Basis`. Since we want to identify an implicit system, we have to include candidate terms which use the implicit variables as arguments and inform the constructor about the meaning of these variables.

```julia
@parameters t
D = Differential(t)
@variables u[1:1](t)
h = [monomial_basis(u[1:1], 4)...]
basis = Basis([h; h .* D(u)], u, implicits = D.(u), iv = t)
```
```
Model ##Basis#485 with 10 equations
States : u(t)
Independent variable: t
Equations
φ₁ = 1
φ₂ = u(t)
φ₃ = u(t)^2
φ₄ = u(t)^3
...
φ₁₀ = (u(t)^4)*Differential(t)(u(t))
```

Next, we define the `ImplicitOptimizer` and solve the problem. It wraps a standard optimizer, by default `STLSQ`, and performs implicit sparse regression upon the selected basis. To improve our result, we batch the data by using a `DataSampler`. Here, we use a train-test split of 0.8 and divide the training data into 10 batches. Since we are using a batching process, we can also choose how the best result is selected across batches; with `by = :min`, the result with the minimum error is returned.
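Conceptually (a sketch of the idea, not the library's exact formulation), implicit sparse regression searches for a sparse coefficient vector $\xi$ in the null space of the evaluated candidate library:

```math
\Theta(X, \dot{X})\,\xi \approx 0, \qquad \xi \ \text{sparse},
```

where each column of $\Theta$ is one basis function $\varphi_i$ evaluated on the data. The trivial solution $\xi = 0$ is excluded by normalizing against one active candidate term, which is why the basis must contain terms involving the implicit variable $\dot{u}$.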

```julia
sampler = DataSampler(
    Split(ratio = 0.8), Batcher(n = 10)
)

opt = ImplicitOptimizer(1e-1:1e-1:5e-1)
res = solve(prob, basis, opt, normalize = false, denoise = false, by = :min, sampler = sampler, maxiter = 1000);
```

```
Linear Solution with 1 equations and 4 parameters.
Returncode: solved
L₂ Norm error : [0.0005232967550040873]
AIC : [-611.5396713416142]
R² : [0.9997517753058279]
```

As we can see, the `DataDrivenSolution` has good metrics. Furthermore, inspecting the underlying system shows that the original equations have been recovered correctly:

```julia
system = result(res)

plot(
    plot(prob), plot(res), layout = (1, 2)
)
```
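As an illustrative sanity check (not part of the original tutorial), we can confirm numerically that the explicit rate law satisfies the implicit polynomial relation the optimizer is expected to recover:

```julia
# Plug the explicit Michaelis-Menten rate law into the implicit
# polynomial form (0.3 + u)du - 0.6(0.3 + u) + 1.5u = 0.
u = 0.7                        # an arbitrary state value
du = 0.6 - 1.5u / (0.3 + u)    # explicit rate law
residual = (0.3 + u) * du - 0.6 * (0.3 + u) + 1.5u
# residual ≈ 0 for every u, so the implicit form holds identically
```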

## Copy-Pasteable Code

```julia
using DataDrivenDiffEq
using LinearAlgebra
using ModelingToolkit
using OrdinaryDiffEq

function michaelis_menten(u, p, t)
    [0.6 - 1.5u[1] / (0.3 + u[1])]
end

u0 = [0.5]

problem_1 = ODEProblem(michaelis_menten, u0, (0.0, 4.0));
solution_1 = solve(problem_1, Tsit5(), saveat = 0.1);
problem_2 = ODEProblem(michaelis_menten, 2 * u0, (4.0, 8.0));
solution_2 = solve(problem_2, Tsit5(), saveat = 0.1);

function michaelis_menten(X::AbstractMatrix, p, t::AbstractVector)
    reduce(hcat, map((x, ti) -> michaelis_menten(x, p, ti), eachcol(X), t))
end

data = (
    Experiment_1 = (X = Array(solution_1), t = solution_1.t,
                    DX = michaelis_menten(Array(solution_1), [], solution_1.t)),
    Experiment_2 = (X = Array(solution_2), t = solution_2.t,
                    DX = michaelis_menten(Array(solution_2), [], solution_2.t)),
)

# Collect the experiments into a single continuous dataset
prob = DataDrivenDiffEq.ContinuousDataset(data);

@parameters t
D = Differential(t)
@variables u[1:1](t)
h = [monomial_basis(u[1:1], 4)...]
basis = Basis([h; h .* D(u)], u, implicits = D.(u), iv = t)
println(basis) # hide

sampler = DataSampler(
    Split(ratio = 0.8), Batcher(n = 10)
)

opt = ImplicitOptimizer(1e-1:1e-1:5e-1)
res = solve(prob, basis, opt, normalize = false, denoise = false, by = :min, sampler = sampler, maxiter = 1000);
println(res) # hide

system = result(res)
println(system) # hide
```

# This file was generated using Literate.jl, https://github.com/fredrikekre/Literate.jl