This article explores the implementation of gradient descent algorithms for minimizing global loss functions in neural networks, particularly in problems governed by Rankine-Hugoniot conditions. While gradient descent reliably converges, scalability issues arise when handling large domains with many coupled networks. To address this, a domain decomposition method (DDM) is introduced, enabling parallel optimization of local loss functions. The result is faster convergence, improved scalability, and a more efficient framework for training complex AI models.

Why Gradient Descent Converges (and Sometimes Doesn’t) in Neural Networks


Abstract and 1. Introduction

1.1. Introductory remarks

1.2. Basics of neural networks

1.3. About the entropy of direct PINN methods

1.4. Organization of the paper

  2. Non-diffusive neural network solver for one dimensional scalar HCLs

    2.1. One shock wave

    2.2. Arbitrary number of shock waves

    2.3. Shock wave generation

    2.4. Shock wave interaction

    2.5. Non-diffusive neural network solver for one dimensional systems of CLs

    2.6. Efficient initial wave decomposition

  3. Gradient descent algorithm and efficient implementation

    3.1. Classical gradient descent algorithm for HCLs

    3.2. Gradient descent and domain decomposition methods

  4. Numerics

    4.1. Practical implementations

    4.2. Basic tests and convergence for 1 and 2 shock wave problems

    4.3. Shock wave generation

    4.4. Shock-Shock interaction

    4.5. Entropy solution

    4.6. Domain decomposition

    4.7. Nonlinear systems

  5. Conclusion and References

3. Gradient descent algorithm and efficient implementation

In this section we discuss the implementation of gradient descent algorithms for solving the minimization problems (11), (20) and (35). These problems involve a global loss functional measuring the residual of the HCL over the whole domain, as well as the Rankine-Hugoniot conditions, which results in the training of a number of neural networks. In all the tests we have performed, the gradient descent method converges and provides accurate results. We also note that, in problems with a large number of DLs, the global loss functional couples a large number of networks and the gradient descent algorithm may converge slowly. For these problems we present a domain decomposition method (DDM).
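Since equations (11), (20) and (35) are not reproduced in this excerpt, the following is only a minimal PyTorch sketch of how a global loss of this kind might be assembled for a scalar HCL. The networks `u_left`, `u_right` (one per smooth region) and `shock` (parameterizing a discontinuity line), the choice of the Burgers flux, and the omission of initial/boundary terms are all illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn


def mlp(width=20, depth=3, in_dim=2, out_dim=1):
    # small fully connected network with tanh activations
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)


u_left, u_right = mlp(), mlp()   # one network per smooth region of the solution (assumed)
shock = mlp(in_dim=1)            # network parameterizing the shock curve s(t) (assumed)


def flux(u):
    # Burgers flux, used here only as a concrete example of f(u)
    return 0.5 * u ** 2


def pde_residual(net, x, t):
    # residual u_t + f(u)_x of the conservation law at collocation points
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    f_x = torch.autograd.grad(flux(u), x, torch.ones_like(u), create_graph=True)[0]
    return u_t + f_x


def global_loss(x_col, t_col):
    # HCL residual for every smooth-region network ...
    res = sum((pde_residual(net, x_col, t_col) ** 2).mean()
              for net in (u_left, u_right))
    # ... plus the Rankine-Hugoniot condition s'(t)(uR - uL) = f(uR) - f(uL)
    t_s = t_col.clone().requires_grad_(True)
    s = shock(t_s)
    s_t = torch.autograd.grad(s, t_s, torch.ones_like(s), create_graph=True)[0]
    uL = u_left(torch.cat([s, t_s], dim=1))
    uR = u_right(torch.cat([s, t_s], dim=1))
    rh = ((s_t * (uR - uL) - (flux(uR) - flux(uL))) ** 2).mean()
    return res + rh   # initial/boundary terms omitted in this sketch
```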

3.1. Classical gradient descent algorithm for HCLs

All the problems (11), (20) and (35) being similar, we will describe the algorithm in detail for problem (20). We assume that the solution initially consists of i) D ∈ {1, 2, . . .} entropic shock waves emanating from x1, . . . , xD, ii) an arbitrary number of rarefaction waves, and that iii) no shock is generated for t ∈ [0, T].

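As an illustration only, a classical full-batch gradient descent iteration on such a global loss could look as follows, reusing the hypothetical `global_loss`, `u_left`, `u_right` and `shock` from the sketch above; the learning rate, iteration count and collocation sampling are arbitrary choices.

```python
# plain (full-batch) gradient descent on the global loss sketched above
params = [p for net in (u_left, u_right, shock) for p in net.parameters()]
opt = torch.optim.SGD(params, lr=1e-3)      # a single full batch makes SGD a classical GD step

x_col = torch.rand(256, 1)                  # collocation points in x (assumed domain [0, 1])
t_col = torch.rand(256, 1)                  # collocation points in t (assumed horizon [0, 1])

for k in range(5000):                       # k: gradient descent iteration index
    opt.zero_grad()
    loss = global_loss(x_col, t_col)
    loss.backward()
    opt.step()
    if k % 1000 == 0:
        print(k, float(loss))
```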

3.2. Gradient descent and domain decomposition methods

Rather than minimizing the global loss function (21) (or (12), (36)), we propose here to decouple the optimization of the neural networks and make it scalable. The approach is closely connected to domain decomposition methods (DDMs), in particular Schwarz Waveform Relaxation (SWR) methods [21, 22, 23]. The resulting algorithm allows for the embarrassingly parallel minimization of local loss functions.
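For illustration, the schematic sketch below shows what a Schwarz-type iteration with independent local minimizations might look like; the overlapping subdomains, the transmission term and the iteration counts are assumptions and do not reproduce the paper's local loss functions or transmission conditions. It reuses the hypothetical `mlp` and `pde_residual` helpers from the earlier sketch.

```python
# schematic Schwarz-type DDM iteration with independent local minimizations
subdomains = [(0.0, 0.6), (0.4, 1.0)]        # two overlapping intervals in x (assumed)
nets = [mlp() for _ in subdomains]           # one local network per subdomain


def local_loss(net, xa, xb, overlap_data):
    # HCL residual on collocation points of the subdomain (xa, xb) ...
    x = xa + (xb - xa) * torch.rand(128, 1)
    t = torch.rand(128, 1)
    res = (pde_residual(net, x, t) ** 2).mean()
    # ... plus a transmission term matching the neighbor's previous iterate
    x_o, t_o, u_o = overlap_data
    match = ((net(torch.cat([x_o, t_o], dim=1)) - u_o) ** 2).mean()
    return res + match


for k_ddm in range(10):                      # k_DDM: Schwarz (DDM) iterations
    # freeze the neighbors' traces from the previous iterate on the overlap
    traces = []
    for j in range(len(subdomains)):
        x_o = torch.full((64, 1), 0.5)       # overlap points (assumed interface at x = 0.5)
        t_o = torch.rand(64, 1)
        with torch.no_grad():
            u_o = nets[1 - j](torch.cat([x_o, t_o], dim=1))   # neighbor of j for two subdomains
        traces.append((x_o, t_o, u_o))
    # local minimizations: fully independent, hence embarrassingly parallel
    for j, (xa, xb) in enumerate(subdomains):
        opt = torch.optim.SGD(nets[j].parameters(), lr=1e-3)
        for k_local in range(200):           # k_Local: local gradient iterations
            opt.zero_grad()
            loss = local_loss(nets[j], xa, xb, traces[j])
            loss.backward()
            opt.step()
```

In such a setting, each pass over the inner `for j ...` loop could be dispatched to a separate process or GPU, which is where the scalability discussed below comes from.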


In conclusion, the DDM becomes relevant thanks to its scalability and whenever k_DDM · k_Local < k_Global, which is expected for D large.


:::info Authors:

(1) Emmanuel LORIN, School of Mathematics and Statistics, Carleton University, Ottawa, Canada, K1S 5B6 and Centre de Recherches Mathématiques, Université de Montréal, Montreal, Canada, H3T 1J4 (elorin@math.carleton.ca);

(2) Arian NOVRUZI, Corresponding Author, Department of Mathematics and Statistics, University of Ottawa, Ottawa, ON K1N 6N5, Canada (novruzi@uottawa.ca).

:::


:::info This paper is available on arXiv under the CC BY 4.0 DEED (Attribution 4.0 International) license.

:::

