
HELP~ I could not understand the code in 03 finite_differences #14

Open
KYRA-ma opened this issue Sep 19, 2022 · 3 comments
KYRA-ma commented Sep 19, 2022

Dear @mandli,
while working through the 03_finite_differences notebook, I could not understand the Error Analysis part. Could you give me some help? Thanks!! Please forgive me for asking some naive questions since I'm new to this area. Here is my list of questions:

  1. When computing the error, why do we use a norm?
     error.append(numpy.linalg.norm(numpy.abs(f_prime(x_hat + delta_x[-1]) - f_prime_hat), ord=numpy.infty))
  2. When using second-order differences for points at the edge of the domain, how do we get coefficients like -3.0 and 4.0?
     f_prime_hat[0] = (-3.0 * f(x_hat[0]) + 4.0 * f(x_hat[1]) - f(x_hat[2])) / (2.0 * delta_x[-1])
     f_prime_hat[-1] = (3.0 * f(x_hat[-1]) - 4.0 * f(x_hat[-2]) + f(x_hat[-3])) / (2.0 * delta_x[-1])
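For context, a minimal sketch of what the quoted error line computes. The functions and grid here are stand-ins, not the notebook's actual setup; the key point is that with `ord=inf` the norm of the pointwise error vector is simply the largest absolute error over the whole grid:

```python
import numpy as np

# Stand-in setup (illustrative, not the notebook's exact code)
delta_x = 0.01
x_hat = np.arange(0.0, 1.0, delta_x)
f, f_prime = np.sin, np.cos

# A first-order backward difference approximating f'(x_hat + delta_x)
f_prime_hat = (f(x_hat + delta_x) - f(x_hat)) / delta_x

# numpy.infty is a deprecated alias for numpy.inf; with ord=inf the
# vector norm reduces to the maximum absolute entry.
pointwise_error = np.abs(f_prime(x_hat + delta_x) - f_prime_hat)
error = np.linalg.norm(pointwise_error, ord=np.inf)
print(np.isclose(error, pointwise_error.max()))  # True
```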

mandli commented Sep 19, 2022

@kamyinnnlok some quick answers for you:

  1. The choice of norm is semi-arbitrary, although for most problems there is one norm that is appropriate or conventionally used.
  2. These are the forward and backward second-order accurate finite differences that use only a single point on the boundary, which can then (usually) be specified with a boundary condition.
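As a sketch of where the coefficients -3.0, 4.0, and -1.0 come from: the method of undetermined coefficients Taylor-expands a f(x) + b f(x + h) + c f(x + 2h), cancels the f and f'' terms, and matches the f' term. Solving that small linear system (illustrative code, not from the notebook):

```python
import numpy as np

# Method of undetermined coefficients: find a, b, c with
#     a f(x) + b f(x + h) + c f(x + 2h) = f'(x) + O(h^2).
# Taylor expanding f(x + h) and f(x + 2h) and collecting terms gives,
# with the coefficients measured in units of 1/h:
#     a + b + c = 0     (f(x)   terms cancel)
#     b + 2c    = 1     (f'(x)  terms give exactly f'(x))
#     b + 4c    = 0     (f''(x) terms cancel)
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 1.0, 4.0]])
coeffs = np.linalg.solve(A, np.array([0.0, 1.0, 0.0]))

# Rescale by 2 to match the "/(2.0 * delta_x)" form in the notebook:
print(2.0 * coeffs)  # [-3.  4. -1.]
```

Mirroring the stencil to use (x, x - h, x - 2h) flips the signs, which gives the 3.0, -4.0, 1.0 coefficients used at the right edge of the domain.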


KYRA-ma commented Sep 19, 2022

@mandli thanks for your reply.
But I still don't understand from your 2nd answer how to derive the expression above. Could you derive the expression
f_prime_hat[0] = (-3.0 * f(x_hat[0]) + 4.0 * f(x_hat[1]) - f(x_hat[2])) / (2.0 * delta_x[-1])
in detail, please? Thanks again~
Please forgive me, but I have more questions:

  1. When I studied 04_BVP_problems tonight, I was confused by the Green's function. Is its form fixed as
     $$
     G(x; \bar{x}) = \begin{cases}
     (\bar{x} - 1) x & 0 \leq x \leq \bar{x} \\
     \bar{x} (x - 1) & \bar{x} \leq x \leq 1
     \end{cases}
     $$
     and how is it derived?

  2. I could not understand the 2nd answer in the exercise on Neumann Boundary Conditions in 04_BVP_problems. It looks like another finite difference. How do we get
     A[-1, -1] = -1.0 / delta_x
     A[-1, -2] = 1.0 / delta_x
     and, in the 3rd answer of the exercise,
     A[-1, -1] = 3.0 / (2.0 * delta_x)
     A[-1, -2] = -4.0 / (2.0 * delta_x)
     A[-1, -3] = 1.0 / (2.0 * delta_x)?

Thanks again if you could help me!!!!


mandli commented Sep 23, 2022

@kamyinnnlok The derivation of the finite difference being used is in the 03 notebook; it may also help to look at the more detailed derivations in the other course notes that exist. For your other questions:

  1. The Green's function for the Laplacian is a standard derivation in almost all PDE textbooks. Suffice it to say that the Green's function is what you get by using the $\delta$ function as the forcing on the right-hand side, i.e. it is the solution of the PDE $\nabla^2 u = \delta(x)$.
  2. Since the boundary condition is on the derivative, those entries come from a finite difference approximation of $u'$ at the boundary; the derivative is what we know there, not the value of $u(x)$ itself.
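To make answer 2 concrete, here is a hedged sketch of the one-sided stencils that produce those matrix entries. The grid, variable names, and sign conventions are assumptions for illustration, not the notebook's exact code (the notebook's signs depend on how the right-hand side is written):

```python
import numpy as np

# Sketch of the last row of a BVP matrix when a Neumann condition
# u'(x[-1]) = sigma is imposed at the right boundary (illustrative setup).
delta_x = 0.1
x = np.linspace(0.0, 1.0, 11)
N = len(x)

# First-order backward difference: u'(x[-1]) ~ (u[-1] - u[-2]) / delta_x
row_first = np.zeros(N)
row_first[-1] = 1.0 / delta_x
row_first[-2] = -1.0 / delta_x

# Second-order backward difference:
# u'(x[-1]) ~ (3 u[-1] - 4 u[-2] + u[-3]) / (2 delta_x)
row_second = np.zeros(N)
row_second[-1] = 3.0 / (2.0 * delta_x)
row_second[-2] = -4.0 / (2.0 * delta_x)
row_second[-3] = 1.0 / (2.0 * delta_x)

# Sanity check on u(x) = x^2, where u'(1) = 2 exactly: the second-order
# row is exact for quadratics; the first-order row is off by delta_x.
u = x**2
print(row_second @ u)  # ~ 2.0 (up to roundoff)
print(row_first @ u)   # ~ 1.9
```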
