Welcome to Intelligent Additive Manufacturing Lab

We're aiming to create a knowledge hub for 3D printing of the future.

Dynamic Mode Decomposition

On this page, we explore an application of dynamic mode decomposition (DMD). Using the DMD spectrum of frequencies, we analyze a video clip and separate it into a foreground video and a background video. Beyond this separation, the result also reveals tiny motions that are hard to see with the naked eye.

Theoretical Background

The primary idea of DMD is to approximate the eigenvalues and modes of the time-independent composition operator (also called the Koopman operator). So, we reshape each frame of the video into a column vector $x_k$:

$x_k = \begin{bmatrix} p_1 & p_2 & \cdots & p_m \end{bmatrix}^T$   (1)

Here $k$ represents the order of the frame and $p_1, \dots, p_m$ represent its pixels. Then we establish the two matrices below by arranging these vectors:

$X_1 = \begin{bmatrix} x_1 & x_2 & \cdots & x_{N-1} \end{bmatrix}$   (2)

$X_2 = \begin{bmatrix} x_2 & x_3 & \cdots & x_N \end{bmatrix}$   (3)

The matrices $X_1$ and $X_2$ are both $m \times (N-1)$ matrices, where $m$ is the number of pixels in a frame and $N$ is the number of frames.
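As a concrete illustration, the time-shifted snapshot matrices of equations (2) and (3) are just two slices of the frame data. Here is a small Python/NumPy sketch with made-up dimensions (the code listing at the bottom of this page does the same in MATLAB):

```python
import numpy as np

# Hypothetical data: m pixels per flattened frame, N frames.
m, N = 6, 5
rng = np.random.default_rng(0)
frames = rng.random((m, N))  # column k is the flattened frame x_k

# Time-shifted snapshot matrices from equations (2) and (3):
X1 = frames[:, :-1]  # [x_1, x_2, ..., x_{N-1}]
X2 = frames[:, 1:]   # [x_2, x_3, ..., x_N]

print(X1.shape, X2.shape)  # both are m x (N-1): (6, 4) (6, 4)
```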

For the composition operator $A$, which maps each frame onto the next, we have the expression:

$X_2 = A X_1$   (4)

Using the pseudo-inverse $X_1^\dagger$, we can get $A = X_2 X_1^\dagger$. But, at this point, $A$ would be an $m \times m$ matrix, which is too large for computation. So we perform a singular value decomposition on $X_1$ to get a dimensionality reduction:

$X_1 = U \Sigma V^*$   (5)

Then we can pick a specific rank $r$, truncating $U$, $\Sigma$, and $V$ to $U_r$, $\Sigma_r$, and $V_r$, to get a low-dimensional structure. Using these matrices, we obtain a reduced-rank approximation of $A$:

$\tilde{A} = U_r^* X_2 V_r \Sigma_r^{-1}$   (6)
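Assuming synthetic matrices in place of real video data, the truncated SVD of equation (5) and the reduced operator of equation (6) look like this in NumPy (all names and dimensions here are illustrative):

```python
import numpy as np

# Synthetic snapshot matrices standing in for video data.
rng = np.random.default_rng(1)
m, N, r = 50, 20, 5
X1 = rng.standard_normal((m, N - 1))
X2 = rng.standard_normal((m, N - 1))

# Economy SVD of X1 (equation 5), truncated to rank r:
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
Ur = U[:, :r]
Sr_inv = np.diag(1.0 / s[:r])
Vr = Vh[:r, :].conj().T

# Reduced-rank approximation of A (equation 6): r x r instead of m x m.
A_tilde = Ur.conj().T @ X2 @ Vr @ Sr_inv

print(A_tilde.shape)  # (5, 5)
```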

Thus, the DMD modes are:

$\Phi = X_2 V_r \Sigma_r^{-1} W$   (7)

where $W$ and $\Lambda$ are the eigenvectors and eigenvalues of the reduced matrix $\tilde{A}$ (i.e., $\tilde{A} W = W \Lambda$), and the coefficients $b = \Phi^\dagger x_1$ are the coordinates of the first frame $x_1$ in the eigenvector basis. Each eigenvalue $\lambda_j$ corresponds to a continuous-time frequency $\omega_j = \ln(\lambda_j)/\Delta t$, where $\Delta t$ is the time between frames.
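Continuing with synthetic data, the eigen-decomposition of $\tilde{A}$ yields the DMD modes of equation (7), the frequencies $\omega_j$, and the coefficients $b$. This is a hedged NumPy sketch; the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
m, N, r, dt = 40, 15, 4, 0.1
X1 = rng.standard_normal((m, N - 1))
X2 = rng.standard_normal((m, N - 1))

# Rank-r SVD of X1 and reduced operator, as in equations (5) and (6).
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
Ur, Sr_inv, Vr = U[:, :r], np.diag(1.0 / s[:r]), Vh[:r, :].conj().T
A_tilde = Ur.conj().T @ X2 @ Vr @ Sr_inv

# Eigenpairs of A_tilde give the DMD eigenvalues; equation (7) lifts the
# eigenvectors W back to full-dimensional DMD modes Phi.
lam, W = np.linalg.eig(A_tilde)
Phi = X2 @ Vr @ Sr_inv @ W                # DMD modes, one per column
omega = np.log(lam.astype(complex)) / dt  # continuous-time frequencies
b = np.linalg.pinv(Phi) @ X1[:, 0]        # coordinates of x_1 in the mode basis

print(Phi.shape, omega.shape, b.shape)  # (40, 4) (4,) (4,)
```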

As a result, this DMD spectrum of frequencies can be used to subtract the background modes. Specifically, assume that some frequency $\omega_p$ satisfies $\|\omega_p\| \approx 0$, and that $\|\omega_j\|$ for $j \neq p$ is bounded away from zero. Thus,

$X_{\mathrm{DMD}} = \underbrace{b_p \varphi_p e^{\omega_p t}}_{\text{background}} + \underbrace{\sum_{j \neq p} b_j \varphi_j e^{\omega_j t}}_{\text{foreground}}$   (8)

where the first term is the background of the video and the second term is the foreground of the video.
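Putting the pieces together, the sketch below applies the split of equation (8) to a toy "video" (a static gradient background plus one bright pixel that moves each frame). The data, rank choice, and names are invented for illustration; the MATLAB listing at the end of the page runs the same pipeline on real frames:

```python
import numpy as np

# Toy video: static background gradient plus one moving bright pixel.
m, N, dt = 64, 30, 1.0
t = np.arange(N) * dt
bg_true = np.linspace(0.2, 0.8, m)
frames = np.tile(bg_true[:, None], (1, N))
for k in range(N):
    frames[k % m, k] += 1.0  # "foreground" object at a new pixel each frame

# DMD pipeline: shifted snapshots, truncated SVD, reduced operator, modes.
X1, X2 = frames[:, :-1], frames[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 10
Ur, Sr_inv, Vr = U[:, :r], np.diag(1.0 / s[:r]), Vh[:r, :].conj().T
A_tilde = Ur.conj().T @ X2 @ Vr @ Sr_inv
lam, W = np.linalg.eig(A_tilde)
Phi = X2 @ Vr @ Sr_inv @ W
omega = np.log(lam.astype(complex)) / dt

# Background mode: the frequency with |omega| closest to zero (equation 8).
p = int(np.argmin(np.abs(omega)))
b = np.linalg.pinv(Phi) @ X1[:, 0]
background = np.outer(Phi[:, p] * b[p], np.exp(omega[p] * t)).real
foreground = frames - np.abs(background)

print(background.shape, foreground.shape)  # (64, 30) (64, 30)
```

The foreground is obtained exactly as in the MATLAB code: by subtracting the magnitude of the reconstructed background mode from the raw frames.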

Result

Here are two video clips. The upper video is a raw MIT research introduction video (http://scienceido.com/LELAB/ or https://www.youtube.com/watch?v=vbl2pJLSoyU, 00:22 – 00:32).

And the bottom video is the background-removed clip. From the output video, we not only eliminate the background, which we can treat as noise, but also find that the flat surface has a tiny wave movement induced by the fluid flowing down it. More importantly, this movement is hard to observe with the naked eye!

So, we can use this result as a boundary shape for a fluid simulation, or to study the impact of the fluid. It is a good starting point for studying the stress between different layers in 3D printing.

Code

clear; close all;
Video = VideoReader('input.mp4'); %load the clip ('input.mp4' is a placeholder path)
%properties of Video
nFrames=Video.NumberOfFrames;
vidHeight=Video.Height;
vidWidth=Video.Width;
duration = Video.Duration/2;
t = linspace(0, duration, nFrames); dt = t(2)-t(1);
%store data to matrix
images=zeros(vidHeight*vidWidth,nFrames);
for k=1:nFrames
im = read(Video, k); %read the k-th frame
%imwrite(im, ['original_frame',num2str(k),'.bmp'], 'bmp');%save im to image
images(:,k)=reshape(im2bw(im), [], 1); %binarize and flatten the frame into a column
end
% A=reshape(images(1,:),vidHeight,vidWidth); %show first row. It is the first frame.
% imshow(A)

X1=images(:,1:end-1);
X2=images(:,2:end);
[U1, S1, TV1]=svd(X1,'econ'); %svd returns V itself, so TV1 holds V (not its transpose)

%plot singular values
figure(1)
subplot(1,2,1)
plot(diag(S1)/sum(diag(S1)),'o')
xlabel('Order of singular value','Fontsize',12)
ylabel('Percentage of Singular Value','Fontsize',12)
%rank singular values
subplot(1,2,2)
temp_Cumsum=cumsum(abs(diag(S1)));
S1_Cumsum=temp_Cumsum/temp_Cumsum(end);
plot(1:length(S1_Cumsum), S1_Cumsum, 'o')
xlabel('Order of singular value','Fontsize',12)
ylabel('Percentage of Quality','Fontsize',12)
S1_Index=find(S1_Cumsum>0.90,1); % # singular value needed to keep 90% quality

%Dimensionality reduction
U2=U1(:,1:S1_Index);
S2=S1(1:S1_Index,1:S1_Index); %truncate the singular values to rank S1_Index
V2=TV1(:,1:S1_Index);
tilde_A=U2'*X2*V2/S2;
%eigen decomposition on tilde A
[EigV, D]=eig(tilde_A);
Phi=X2*V2/S2*EigV;

Lambda=diag(D);
omega=log(Lambda)/dt;
figure(2)
plot(abs(omega),'o')
title('Distribution of omega','Fontsize',12)
bg_index = find(abs(omega)==min(abs(omega))); %find background: ||w_p|| ≈ 0
omega_bg = omega(bg_index);
Phi_bg = Phi(:,bg_index);

u0=X1(:,1);
y0=Phi_bg\u0;
u_modes=zeros(length(omega_bg),length(t));
for iter=1:length(t)
u_modes(:,iter)=(y0.*exp(omega_bg*t(iter)));
end
u_dmd=Phi_bg*u_modes;

%Sparse
% u_sparse=images-abs(u_dmd);
% R=u_sparse.*(u_sparse<0);
% u_dmd=R+abs(u_dmd);
% u_sparse_dmd=u_sparse-R;

u_sparse=images-abs(u_dmd);
% u_s_dmd=u_sparse + 150;
figure(3)
for j=1:nFrames
imshow(reshape(u_dmd(:,j),vidHeight,vidWidth))
drawnow;
end

figure(4)
u_s_dmd=u_sparse+1;
for j=1:nFrames
imshow(reshape(u_s_dmd(:,j),vidHeight,vidWidth))
drawnow;
end