iMechanica - deep learning
https://imechanica.org/taxonomy/term/12101
Computational Mechanics Postdoctoral Research Scientist Position at Columbia University
https://imechanica.org/node/24246
<div class="field field-name-taxonomy-vocabulary-6 field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/73">job</a></div></div></div><div class="field field-name-taxonomy-vocabulary-8 field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/12101">deep learning</a></div><div class="field-item odd"><a href="/taxonomy/term/169">Plasticity</a></div><div class="field-item even"><a href="/taxonomy/term/179">solid mechanics</a></div><div class="field-item odd"><a href="/taxonomy/term/31">fracture</a></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Dear colleagues, </p>
<p>There is a new opening for one postdoc position, to be filled immediately, in my research group in the Department of Civil Engineering and Engineering Mechanics at Columbia University. We are looking for a postdoctoral researcher in the broad area of computational mechanics. Candidates should have expertise in modeling the dynamic response of path-dependent materials. Our project focuses specifically on applications of <strong>machine learning (reinforcement learning, graph embedding) for computational plasticity and damage.</strong> </p>
<p><strong>U.S. citizenship is highly preferred, but exceptional cases will be considered</strong>.<span> </span><strong>Candidates from underrepresented groups are encouraged to apply. </strong></p>
<p>Please send a CV, a brief summary of research interests and skills, published journal articles, and the names, affiliations, phone numbers, and email addresses of three references to <a href="mailto:wsun@columbia.edu">wsun@columbia.edu</a>. Evaluation of candidates will begin immediately and will continue until the position is filled. More information about the research group can be found at: </p>
<p><a href="https://www.poromechanics.org/recruitment.html">https://www.poromechanics.org/recruitment.html</a></p>
<p><a href="https://www.poromechanics.org/team-members.html">https://www.poromechanics.org/team-members.html</a></p>
<p><a href="https://www.poromechanics.org/publications.html">https://www.poromechanics.org/publications.html</a></p>
<p><a href="https://www.poromechanics.org/research.html">https://www.poromechanics.org/research.html</a></p>
<p><a href="https://www.poromechanics.org/pi.html">https://www.poromechanics.org/pi.html</a></p>
<p>For any questions, please contact me via email at <a href="mailto:wsun@columbia.edu">wsun@columbia.edu</a>. </p>
<p>Best Regards,</p>
<p>Steve </p>
</div></div></div>
Fri, 29 May 2020 14:14:00 +0000 | WaiChing Sun | https://imechanica.org/node/24246

SciANN: Scientific computations and physics-informed deep learning using artificial neural networks
https://imechanica.org/node/24236
<div class="field field-name-taxonomy-vocabulary-6 field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/76">research</a></div></div></div><div class="field field-name-taxonomy-vocabulary-8 field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/12101">deep learning</a></div><div class="field-item odd"><a href="/taxonomy/term/12834">neural networks</a></div><div class="field-item even"><a href="/taxonomy/term/12835">physics-informed</a></div><div class="field-item odd"><a href="/taxonomy/term/347">elasticity</a></div><div class="field-item even"><a href="/taxonomy/term/169">Plasticity</a></div><div class="field-item odd"><a href="/taxonomy/term/12836">variational</a></div><div class="field-item even"><a href="/taxonomy/term/12837">PINN</a></div><div class="field-item odd"><a href="/taxonomy/term/12838">regression</a></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><span>Interested in deep learning, scientific computing, and solution and inversion methods for PDEs?</span></p>
<p>Check out the preprint at: </p>
<p><a href="https://www.researchgate.net/publication/341478559_SciANN_A_Keras_wrapper_for_scientific_computations_and_physics-informed_deep_learning_using_artificial_neural_networks">https://www.researchgate.net/publication/341478559_SciANN_A_Keras_wrappe...</a></p>
<p><img src="https://imechanica.org/files/sciann.png" alt="" width="600" height="444" /></p>
<p>Example problems shared in our GitHub repository show how to use SciANN for the forward solution and inversion of:</p>
<p>- Linear Elasticity: <a href="https://github.com/sciann/sciann-applications/tree/master/SciANN-SolidMechanics">https://github.com/sciann/sciann-applications/tree/master/SciANN-SolidMe...</a></p>
<p>- Burgers equation: <a href="https://github.com/sciann/sciann-applications/blob/master/SciANN-BurgersEquation">https://github.com/sciann/sciann-applications/blob/master/SciANN-Burgers...</a></p>
<p>- Regression: <a href="https://github.com/sciann/sciann-applications/tree/master/SciANN-NN-Regression">https://github.com/sciann/sciann-applications/tree/master/SciANN-NN-Regr...</a></p>
<p>- More examples are coming (elasto-plasticity, ...)</p>
<p><a href="https://github.com/sciann/sciann-applications">https://github.com/sciann/sciann-applications</a></p>
</div></div></div><div class="field field-name-upload field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><table class="sticky-enabled">
<thead><tr><th>Attachment</th><th>Size</th> </tr></thead>
<tbody>
<tr class="odd"><td><span class="file"><img class="file-icon" alt="Image icon" title="image/png" src="/modules/file/icons/image-x-generic.png" /> <a href="https://imechanica.org/files/sciann.png" type="image/png; length=317527">sciann.png</a></span></td><td>310.08 KB</td> </tr>
</tbody>
</table>
</div></div></div>
Mon, 25 May 2020 16:10:09 +0000 | haghighat | https://imechanica.org/node/24236

Journal Club for February 2020: Machine Learning in Mechanics: simple resources, examples & opportunities
https://imechanica.org/node/23957
<div class="field field-name-taxonomy-vocabulary-6 field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/76">research</a></div></div></div><div class="field field-name-taxonomy-vocabulary-8 field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/10902">machine learning</a></div><div class="field-item odd"><a href="/taxonomy/term/12101">deep learning</a></div><div class="field-item even"><a href="/taxonomy/term/11415">data-driven</a></div><div class="field-item odd"><a href="/taxonomy/term/584">mechanics</a></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Machine learning (ML) in Mechanics is a fascinating and timely topic. This article follows from a kind invitation to share some thoughts on using ML algorithms to solve mechanics problems, by way of an overview of my past and current research efforts with students and collaborators in this field. I first provide a brief introduction to ML for colleagues unfamiliar with the topic, followed by a section on the usefulness of ML in Mechanics; finally, I reflect on the challenges and opportunities in this field. You are welcome to comment and provide suggestions for improvement.</p>
<p><strong>1. INTRODUCTION (for people not familiar with ML)</strong></p>
<p>ML has been around for more than 60 years, alternating through periods of euphoria and skepticism – see timeline in <a href="https://www.import.io/post/history-of-deep-learning/">https://www.import.io/post/history-of-deep-learning/</a>. In the last decade(s), however, the field has become much more accessible to non-specialists due to open source software written in general-purpose programming languages (e.g. Python). Unsurprisingly, early adopters of ML are found in fields with open databases gathering vast amounts of data from simulations or experiments, such as bioinformatics (e.g. <a href="https://www.ncbi.nlm.nih.gov/">https://www.ncbi.nlm.nih.gov/</a>), chemistry (e.g. <a href="https://pubchem.ncbi.nlm.nih.gov/">https://pubchem.ncbi.nlm.nih.gov/</a>) and materials science (e.g. <a href="http://oqmd.org/">http://oqmd.org/</a> and <a href="http://aflowlib.org/">http://aflowlib.org/</a>). Yet, ML can impact any scientific discipline where enough data about a problem is available. Mechanics is no exception, despite its specific challenges (see section 3).</p>
<p>For the purposes of this discussion, I will focus on two branches of ML: supervised learning (regression and classification), and unsupervised learning (clustering). The third main branch, reinforcement learning, is not discussed here.</p>
<p>If you are new to ML and want some hands-on experience, I recommend starting with Scikit-learn (<a href="https://scikit-learn.org/">https://scikit-learn.org/</a>) – no conflict of interest in this recommendation. Its user interface is simple, it is written in Python, and the documentation is very useful because it starts with the most restrictive algorithms (e.g. linear models such as ordinary least squares) and ends with the least restrictive ones (e.g. neural networks). This ordering encourages the correct mindset when applying ML algorithms to a problem: in the same way that you don’t use an airplane to go from the living room to the kitchen, you probably don’t want to use a neural network to discover that your data is described by a second-order polynomial. In addition, Scikit-learn’s documentation has a simple introduction for each algorithm and multiple examples (including source code), which lets you try different algorithms even before you understand their inner workings and why some are better choices than others for a given problem.</p>
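<p>To make the "start simple" advice concrete, here is a minimal Scikit-learn session (a sketch written for illustration, assuming NumPy and Scikit-learn are installed; it is not part of the script discussed next):</p>

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Noiseless data generated by a known linear law: y = 2x + 1
X = np.linspace(0.0, 10.0, 20).reshape(-1, 1)  # Scikit-learn expects 2D inputs
y = 2.0 * X.ravel() + 1.0

# Ordinary least squares: one of the "most restrictive" models mentioned above
model = LinearRegression().fit(X, y)

print(model.coef_[0], model.intercept_)  # recovers slope 2 and intercept 1
```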
<p>I created a script with simple examples that can be used to replicate the figures of this section and to get more information about error metrics. The script is called "Intro2ML_regression.txt"; it requires Scikit-learn and Keras (<a href="https://keras.io/">https://keras.io/</a>) to be installed on your system. Please change the extension from .txt to .py to run it. Keras is a high-level Application Programming Interface (API) for creating neural network models. It interfaces with powerful, performance-optimized ML platforms, TensorFlow or Theano, while keeping them user friendly.</p>
<p><strong>1.1. Supervised learning: regression</strong></p>
<p>Even if you are not familiar with ML, as a researcher or practitioner in engineering you have probably faced simple regression tasks (curve fitting) in which you are given a set of discrete points (training data) and want to find a continuous function (model) that either interpolates all the points or describes the general behavior of the data by smoothing it (e.g. least-squares regression). The outcome of this task is a parametric function that describes your data, e.g. y = ax<sup>3</sup> + bx + c, and that should predict the data points that were not included in the fitting process (testing data). Let’s consider an example: the figure below shows the best fit of three different polynomials using 8 training points (red dots). The red dashed line represents the true function (to be discovered) and the black crosses represent testing points on that function that were not used for fitting the polynomials. Note that the polynomial of degree 7 is interpolatory, while the other two polynomials smooth the data. The quality of the prediction is then assessed by a simple error metric, e.g. the mean squared error between the model's predictions at the test points and the true values. For example, the test points close to x=0 in the figure below are very far from the predictions of all three polynomial models, leading to a large mean squared error.</p>
<p><img src="https://imechanica.org/files/Poly_xsinx_8TrainPoints_noiseless_2.png" alt="Polynomial fitting with 8 training points (noiseless)" width="400" height="300" /></p>
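<p>The polynomial comparison above can be sketched with NumPy alone (the 8 equispaced training points on [0, 10] are an assumption about the figure's setup):</p>

```python
import numpy as np

# Function to be discovered, sampled at 8 training points
f = lambda x: x * np.sin(x)
x_train = np.linspace(0.0, 10.0, 8)
y_train = f(x_train)
x_test = np.linspace(0.0, 10.0, 100)  # dense test points on the true function

mse = {}
for degree in (1, 3, 7):
    coeffs = np.polyfit(x_train, y_train, degree)          # least-squares fit
    mse[degree] = np.mean((np.polyval(coeffs, x_test) - f(x_test)) ** 2)

# Degree 7 interpolates the 8 training points exactly (8 coefficients, 8 conditions),
# but none of the three polynomials predicts the test points well everywhere
resid7 = np.polyval(np.polyfit(x_train, y_train, 7), x_train) - y_train
```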
<p>As with polynomial fitting, ML algorithms are also capable of regression via supervised learning. However, ML algorithms create models that are effectively non-parametric: they have so many parameters that it does not make sense to try to extract an underlying analytical expression. Therefore, people colloquially refer to ML models as black boxes: they are predictive, but we don’t know the analytical form of the model. We lose insight, but we gain very powerful regression tools.</p>
<p><strong>1.1.1. Choosing an ML algorithm</strong></p>
<p>To illustrate the argument of not using an airplane to go from the living room to the kitchen, consider two classic ML algorithms:</p>
<p>Neural Networks (NNs) in their non-Bayesian formulation: the quintessential ML algorithm, built from many simple transfer functions connecting neurons (nodes) through weighted connections. Training involves finding this vast number of weights such that the model fits the data. Because NNs are built from very simple operations, they are extremely scalable (they can easily deal with millions of data points). However, as you can imagine, training is not a trivial process because of the overwhelming number of possibilities for the weights of every connection.</p>
<p>Gaussian Processes (GPs): a Bayesian (probabilistic) ML algorithm, meaning that it predicts the response as well as its uncertainty. This algorithm has outstanding regression capabilities and, in general, is easy to train. Unfortunately, it scales poorly, so it is limited to small datasets (on the order of 10,000 points).</p>
<p>The figure below shows the result of training GPs and NNs on the same training data used above for polynomial fitting. As can be observed, GPs provide an excellent approximation of the function x*sin(x) even when using only 8 points, and they also quantify the uncertainty away from the training data. NNs, in contrast, are much harder to train on such a scarce dataset. In the script made available here, there is a grid search that helps find reasonable parameters (neurons, epochs, etc.) for the neural network via cross-validation, but as you can verify yourself, finding an appropriate model is not trivial. The script also includes much more information than what I am covering here, such as the loss on the training and test data as a function of the number of epochs, and the mean squared errors for the three approximation algorithms. The main point here is to illustrate my airplane analogy: NNs are in general not the best choice for approximating low-dimensional, scarce data. Note, however, that both GPs and NNs approximate the function clearly better than polynomial regression.</p>
<p><img src="https://imechanica.org/files/GPR_NN_8TrainPoints_noiseless_0.png" alt="GPR vs NN with 8 training points (noiseless)" width="801" height="301" /></p>
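<p>The GP half of this comparison takes only a few lines in Scikit-learn (the kernel and restart settings below are illustrative choices; the full setup is in the script):</p>

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

f = lambda x: x * np.sin(x)
X_train = np.linspace(0.0, 10.0, 8).reshape(-1, 1)
y_train = f(X_train).ravel()

# Amplitude * RBF kernel; hyperparameters are optimized during fit
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5, random_state=0)
gp.fit(X_train, y_train)

# The GP returns a mean prediction AND a standard deviation (the uncertainty)
X_test = np.linspace(0.0, 10.0, 100).reshape(-1, 1)
y_mean, y_std = gp.predict(X_test, return_std=True)
```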
<p><strong>1.1.2. Do we know a priori if there are “enough” training points?</strong></p>
<p>No. Consider a second example where the function to discover is now x*sin(3x) within the same domain. If we use 8 training points, as we did for finding x*sin(x), our approximation is obviously worse, as seen in the figure below. So we need more training data to approximate this new function with similar quality. Hence, we only know that we have enough training data by iteratively assessing the error of our approximation on the test data, and we stop increasing the training set size when the error converges to a reasonable value (or when we run out of data!). In practice, the required training set size may be very large when the problem has many input variables, due to the curse of dimensionality: every additional input variable causes an exponential growth of the volume that needs to be sampled. This can be a strong limitation, because we may not have the thousands or millions of training points required for appropriate learning.</p>
<p><img src="https://imechanica.org/files/GPR_8vs20_TrainPoints_noiseless_0.png" alt="GPR with 8 vs 20 training points (noiseless)" width="800" height="300" /></p>
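<p>The iterative check is easy to script; a sketch for x*sin(3x) follows (the training sizes and GP settings are illustrative assumptions):</p>

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

f = lambda x: x * np.sin(3.0 * x)
X_test = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
y_test = f(X_test).ravel()

mse = {}
for n in (8, 20, 40):
    X_train = np.linspace(0.0, 10.0, n).reshape(-1, 1)
    gp = GaussianProcessRegressor(n_restarts_optimizer=5, random_state=0)
    gp.fit(X_train, f(X_train).ravel())
    mse[n] = np.mean((gp.predict(X_test) - y_test) ** 2)

# 8 points undersample sin(3x); the error drops once the training set resolves
# the oscillations, and we stop growing the data once the error converges
```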
<p><strong>1.1.3. What happens if my data is noisy or uncertain?</strong></p>
<p>Noisy datasets are very common in engineering. Data coming from experiments often depends on environmental fluctuations, material defects, etc. Even when the data comes from simulations, datasets can still be noisy (stochastic). For example, simulations of structures undergoing buckling depend on the geometric imperfections considered in the model; if these imperfections are sampled from a statistical distribution, then the predicted properties are not deterministic. The figure below illustrates the detrimental effects of stochasticity. The data is generated from the same x*sin(x) function, now perturbed with Gaussian noise whose two-standard-deviation band is represented by the vertical bars. As seen in the figure, the polynomial of degree 19 (interpolatory) suffers from severe overfitting, while GPs and NNs can be trained to predict the response appropriately. However, NNs need to be trained while ensuring that overfitting does not happen, which in practice means training via cross-validation and monitoring the evolution of the loss on both the training and the testing data.</p>
<p><img src="https://imechanica.org/files/Poly_vs_GPR_vs_NN_20TrainPoints_NOISY_0.png" alt="Comparison of the 3 algorithms for noisy dataset" width="800" height="618" /></p>
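<p>The overfitting contrast can be reproduced as follows (the noise level, seed, and kernel are illustrative choices, not the ones behind the figure):</p>

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
f = lambda x: x * np.sin(x)
x_train = np.linspace(0.0, 10.0, 20)
y_train = f(x_train) + rng.normal(0.0, 0.5, size=x_train.size)  # noisy observations
x_test = np.linspace(0.0, 10.0, 200)

# Interpolatory polynomial (degree 19 through 20 points): fits the noise exactly
poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=19)
mse_poly = np.mean((poly(x_test) - f(x_test)) ** 2)

# GP with a WhiteKernel term: noise is absorbed as uncertainty, the mean stays smooth
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
gp.fit(x_train.reshape(-1, 1), y_train)
mse_gp = np.mean((gp.predict(x_test.reshape(-1, 1)) - f(x_test)) ** 2)
```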
<p><strong>1.1.4. What is the difference between ML and Optimization?</strong></p>
<p>ML aims at learning a model that describes your entire dataset and predicts the response everywhere, whereas optimization aims at finding the optimum point, i.e. it does not care about regions that are far from the optimum. In principle, if you just want to find the optimum once, you should not need traditional ML algorithms, as direct optimization is likely to be faster. However, sometimes ML can help the optimization process because it can provide a surrogate model, i.e. a model that describes the data reasonably well and that is differentiable. An easy way to visualize the benefits of a surrogate model from ML is to consider the above noisy dataset. ML algorithms can learn the underlying function and disregard the noise (smooth the response surface) by treating it as uncertainty, which significantly facilitates the task of finding an optimum. This has the added benefit of enabling gradient-based optimization methods! However, there is a trade-off: if you spend a lot of computational effort learning regions of the surrogate model that are far from the optimum, you waste resources that could be better exploited by the optimization algorithm alone. Hence, the trade-off is between exploration of new regions and exploitation of regions where the likelihood of finding the optimum is high.</p>
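<p>A sketch of the surrogate idea (an illustration with arbitrary choices, not a full Bayesian optimization loop): fit a GP to noisy samples, then minimize the smooth surrogate mean instead of the noisy data itself.</p>

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
f = lambda x: x * np.sin(x)                       # true function on [0, 2*pi]
x_train = rng.uniform(0.0, 2.0 * np.pi, 30)
y_train = f(x_train) + rng.normal(0.0, 0.3, 30)   # noisy evaluations

surrogate = GaussianProcessRegressor(
    kernel=RBF(1.0) + WhiteKernel(0.1), normalize_y=True, random_state=0
)
surrogate.fit(x_train.reshape(-1, 1), y_train)

# The surrogate mean is smooth, so its minimum is easy to locate; a dense grid
# is used here, but a gradient-based optimizer would work on the smooth mean too
grid = np.linspace(0.0, 2.0 * np.pi, 1000)
x_opt = grid[np.argmin(surrogate.predict(grid.reshape(-1, 1)))]
```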
<p><strong>1.2. Supervised learning: classification</strong></p>
<p>Classification is probably the most widely known task in ML. Instead of learning a continuous model, the ML algorithm learns discrete classes. Once you understand how to train for regression, classification is not significantly different. The typical example is learning from (many) images of cats and dogs how to classify an unseen image as a cat or a dog, and training proceeds very much as described in section 1.1. As noted in section 2, classification can be useful to determine the bounds of the design space that contain interesting information (where you can then perform optimization or regression).</p>
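<p>A minimal classification example in the same spirit (the two-parameter "design space" and the feasibility rule are invented for illustration):</p>

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy design space: two input parameters per design; a design belongs to the
# "interesting" class (label 1) when the parameters sum above 1.0 -- a stand-in
# for, e.g., a feasibility or buckling criterion
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

clf = LogisticRegression().fit(X[:150], y[:150])   # train on 150 labeled designs
accuracy = clf.score(X[150:], y[150:])             # classify the held-out 50
```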
<p><strong>1.3. Unsupervised learning</strong></p>
<p>These ML algorithms attempt to find patterns in data without pre-existing labels. Clustering is a typical task of unsupervised learning, where the algorithm finds groups of points that have similar characteristics according to a chosen metric. In mechanics, people familiar with Voronoi diagrams should have a good intuition for how clustering works, as Voronoi tessellations form the basis of an important unsupervised learning method called k-means clustering. The next section also provides an example from our work that illustrates the usefulness of unsupervised learning in mechanics.</p>
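<p>A minimal k-means sketch (the two "material response" groups below are synthetic stand-ins for points with similar mechanical behavior):</p>

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two groups of 2D feature vectors, e.g. local strain measures at material points
group_a = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(50, 2))
group_b = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(50, 2))
features = np.vstack([group_a, group_b])

# k-means partitions the points into k Voronoi-like cells around learned centroids
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
```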
<p><strong>2. IS ML USEFUL IN MECHANICS?</strong></p>
<p>ML is especially useful when we have access to large databases for the problems of interest and when the analytical solution is not known. In mechanics, these conditions can be difficult to meet, as discussed in Section 3. Yet, there are multiple situations where ML algorithms can be very useful for solving mechanics problems. I will highlight a few of our examples and provide links to the codes, which can be adapted to different problems. Our group is working towards making these codes more transferable and more user-friendly. I also invite you to share your own examples in the comments below.</p>
<p><strong>2.1. Classification and regression: Metamaterial design assisted by Bayesian machine learning [1]</strong></p>
<p><img src="https://imechanica.org/files/macro_supercompressible_metamaterial_0.png" alt="Macroscopic metamaterial" width="800" height="705" /></p>
<p>Metamaterial design is primarily a geometric exploration aiming at obtaining new material properties. Mechanical metamaterials tend to explore extreme buckling and postbuckling behavior, which means that the properties of interest are sensitive to geometric imperfections. Designing under these circumstances is challenging because the database includes uncertain properties (noise), and the regions of interest can be narrow (in the figure above, the yellow region corresponds to the only region where the metamaterial topology is classified as reversibly supercompressible). As illustrated at the end of section 1, ML is particularly suitable for learning under uncertainty, and in the case of Bayesian ML it can even predict the uncertainty of the model predictions. This facilitates the navigation of the design space, and the subsequent robust optimization. Using a new Bayesian ML method [2] that extends the traditional Gaussian Processes to larger database sizes (>100,000 training points), we showed that a metamaterial could become supercompressible [1] by learning directly from finite element simulations and without conducting experiments at the metamaterial level (only characterizing the base material). The supercompressible metamaterial was then validated experimentally a posteriori by mechanically testing 3D printed specimens created with brittle base materials at the macroscale (figure above) and micro-scale (figure below).</p>
<p>The code used in this work is available: <a href="https://github.com/mabessa/F3DAS">https://github.com/mabessa/F3DAS</a></p>
<p>General purpose video about this work: <a href="https://www.youtube.com/watch?v=cWTWHhMAu7I">https://www.youtube.com/watch?v=cWTWHhMAu7I</a></p>
<p><img src="https://imechanica.org/files/micro_supercompressible_metamaterial_0.png" alt="Microscopic metamaterial" width="800" height="625" /></p>
<p><strong>2.2. Clustering: Data-driven modeling of plastic properties of heterogeneous materials [3,4]</strong></p>
<p><img src="https://imechanica.org/files/clustering_example_0.png" alt="Clustering step in SCA method" width="550" height="147" /></p>
<p>In work developed in collaboration with Zeliang Liu and Wing Kam Liu [3], we created a new numerical method called Self-consistent Clustering Analysis (SCA), in which the computational time for simulations of plastic heterogeneous materials with damage is reduced from one day on a supercomputer to a few seconds or minutes. One of the key ideas of this method is to use clustering ML algorithms to compress the information describing heterogeneous materials by grouping points that have similar mechanical behavior. This enabled the creation of databases of adequate size to model the variation of toughness of these composites via a data-driven framework we created in [4]. The code for the data-driven framework is similar to the previous one (F3DAS), but it is tailored to creating representative volume elements of materials: <a href="https://github.com/mabessa/F3DAM">https://github.com/mabessa/F3DAM</a></p>
<p>The figure below shows the variation of composite toughness for different ductile matrix materials that was learned with the data-driven framework. In the figure each blue dot represents a simulation obtained by SCA.</p>
<p><img src="https://imechanica.org/files/composite_toughness_0.png" alt="Data-driven analysis of composite toughness" width="650" height="249" /> </p>
<p><strong>2.3. Deep learning: Predicting history- or path-dependent phenomena with Recurrent Neural Networks [5]</strong></p>
<p>One difficulty we faced in [4] was the inability to learn history-dependent or path-dependent phenomena using ML algorithms. This is non-trivial because the ML map that needs to be built between inputs and outputs is no longer unique, i.e. there are different paths to reach the same state, and the state depends on the history of states that occurred previously. However, in work developed in collaboration with Mojtaba Mozaffar and others [5], we showed that recurrent neural networks (RNNs) have the ability to learn path-dependent plasticity. In principle, any irreversible time- or path-dependent process can be learned by RNNs or an equivalent algorithm, although the training process can be difficult. The same base code as in [4] was used here, this time with RNNs as the ML algorithm: <a href="https://github.com/mabessa/F3DAM">https://github.com/mabessa/F3DAM</a></p>
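<p>The non-uniqueness is easy to see in a toy 1D elastic-perfectly-plastic update (the material parameters below are arbitrary): two strain paths ending at the same total strain produce different stresses, so no memoryless input-to-output map can capture both.</p>

```python
import numpy as np

def stress_path(strain_increments, E=1.0, sigma_y=0.5):
    """Stress after a sequence of strain increments (1D, perfect plasticity)."""
    sigma = 0.0
    for d_eps in strain_increments:
        sigma += E * d_eps                                # elastic predictor
        sigma = float(np.clip(sigma, -sigma_y, sigma_y))  # plastic corrector
    return sigma

# Both paths end at the same total strain (0.4), but the stresses differ:
monotonic = stress_path([0.4])          # stays elastic
load_unload = stress_path([1.0, -0.6])  # yields at +0.5, then unloads elastically
```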
<p>The figure below shows the yield-surface evolution under different deformation conditions and paths. Finite element predictions are shown in dotted lines, while the deep learning predictions are shown in solid lines. Note the distortion of the yield surfaces with different loading paths, which is predicted by the RNNs.</p>
<p><img src="https://imechanica.org/files/RNN_plasticity_yield_surface_0.png" alt="RNN predicts path-dependent plasticity" width="500" height="406" /></p>
<p><strong>3. CHALLENGES AND OPPORTUNITIES</strong></p>
<p>There are significant challenges and opportunities within the field of ML and Mechanics. Coincidentally, last week there was a CECAM-Lorentz workshop co-organized by Pedro Reis (EPFL), Martin van Hecke (Leiden/AMOLF), Mark Pauly (EPFL) and me (TU Delft) where there was an open discussion about this topic. The following points reflect part of the discussions, but they result from my own views and personal perspective on the topic. I am happy to expand this list based on feedback:</p>
<p>1. High-throughput experiments: data-hungry ML algorithms challenge experimentalists because they motivate the development of high-throughput experiments. Furthermore, open sharing of data via centralized repositories may become a necessity.</p>
<p>2. Improved efficiency of complex simulations: there are many simulations whose high computational expense still impairs the use of machine learning and data-driven methods. In solids, fracture and fatigue remain a challenge, while in fluids one of the concerns is turbulence. Development of new reduced order models [3,6,7] is fundamental for advancements in the field.</p>
<p>3. Physics-informed ML: an alternative to creating more data is to encode prior information in ML algorithms based on known physics, such that they require less data to learn [8,9]. For example, Thomas et al. [8] encoded rotation- and translation-invariance into neural networks, which facilitates learning in different settings (think about molecules rotating in an atomistic simulation).</p>
<p>4. Uncertainty quantification for very large dimensional problems: Zhu and Zabaras [10] have shown how Bayesian deep convolutional encoder-decoder networks are powerful methods for uncertainty quantification in very large dimensional data. This can open new ways of dealing with large data while still quantifying uncertainty.</p>
<p>5. ML to discover new laws of physics: Schmidt and Lipson [11] have shown that genetic programming (symbolic regression) can learn Hamiltonians and Lagrangians by observing different systems, such as a swinging double pendulum. This is a different approach to artificial intelligence, where evolutionary algorithms are used to create equations (eliminating the “black box”). Brunton et al. [12] developed a fast alternative by limiting the form of the governing equations and leveraging sparsity techniques. These and other efforts towards learning the underlying equations that describe a physical process are important because such laws are extrapolatory, which is not quite the case for the other ML techniques.</p>
<p>6. Reinforcement learning and Bayesian optimization: there are techniques where ML algorithms perform optimization while avoiding pre-sampling of the design space. Online Gaussian Processes [13], for example, are a type of Bayesian optimization algorithm that sits in between ML and optimization. They are capable of balancing exploration and exploitation in an attempt to quickly reach a global optimum.</p>
<p>7. Mechanics to improve the understanding of ML: a puzzling article by Geiger et al. [14] highlights that it is not just ML that can help Mechanics; in fact, Mechanics can help ML. Granular media may help uncover properties of deep neural networks! Certainly, there are many more interesting directions to pursue following other recent works [15-17], among many others. I apologize in advance for not including more articles from other colleagues.</p>
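<p>The sparsity idea behind the method of Brunton and co-workers can be illustrated in a few lines (the toy ODE, candidate library, and threshold below are my own choices for illustration, not the paper's):</p>

```python
import numpy as np

# Toy system with a known sparse law: dx/dt = 0.5*x - 0.2*x**3
x = np.linspace(-2.0, 2.0, 100)
dxdt = 0.5 * x - 0.2 * x ** 3               # derivatives assumed measured exactly

# Candidate library of terms; only two of them appear in the true law
library = np.column_stack([x, x ** 2, x ** 3])

# Sequentially thresholded least squares: fit, zero out small coefficients, refit
coeffs = np.linalg.lstsq(library, dxdt, rcond=None)[0]
for _ in range(5):
    small = np.abs(coeffs) < 0.1            # sparsity threshold
    coeffs[small] = 0.0
    coeffs[~small] = np.linalg.lstsq(library[:, ~small], dxdt, rcond=None)[0]
# coeffs recovers [0.5, 0.0, -0.2]: a governing equation, not just a black box
```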
<p><strong>REFERENCES</strong></p>
<p>[1] Bessa, M. A., Glowacki, P., & Houlder, M. (2019). Bayesian Machine Learning in Metamaterial Design: Fragile Becomes Supercompressible. Advanced Materials, 31(48), 1904845.</p>
<p>[2] Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521(7553), 452-459.</p>
<p>[3] Liu, Z., Bessa, M. A., & Liu, W. K. (2016). Self-consistent clustering analysis: an efficient multi-scale scheme for inelastic heterogeneous materials. Computer Methods in Applied Mechanics and Engineering, 306, 319-341.</p>
<p>[4] Bessa, M. A., Bostanabad, R., Liu, Z., Hu, A., Apley, D. W., Brinson, C., Chen, W. & Liu, W. K. (2017). A framework for data-driven analysis of materials under uncertainty: Countering the curse of dimensionality. Computer Methods in Applied Mechanics and Engineering, 320, 633-667.</p>
<p>[5] Mozaffar, M., Bostanabad, R., Chen, W., Ehmann, K., Cao, J., & Bessa, M. A. (2019). Deep learning predicts path-dependent plasticity. Proceedings of the National Academy of Sciences, 116(52), 26414-26420.</p>
<p>[6] Moulinec, H., & Suquet, P. (1998). A numerical method for computing the overall response of nonlinear composites with complex microstructure. Computer methods in applied mechanics and engineering, 157(1-2), 69-94.</p>
<p>[7] Ladevèze, P., Passieux, J. C., & Néron, D. (2010). The latin multiscale computational method and the proper generalized decomposition. Computer Methods in Applied Mechanics and Engineering, 199(21-22), 1287-1296.</p>
<p>[8] Thomas, N., Smidt, T., Kearnes, S., Yang, L., Li, L., Kohlhoff, K., & Riley, P. (2018). Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds. arXiv preprint arXiv:1802.08219.</p>
<p>[9] Raissi, M., & Karniadakis, G. E. (2018). Hidden physics models: Machine learning of nonlinear partial differential equations. Journal of Computational Physics, 357, 125-141.</p>
<p>[10] Zhu, Y., & Zabaras, N. (2018). Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification. Journal of Computational Physics, 366, 415-447.</p>
<p>[11] Schmidt, M., & Lipson, H. (2009). Distilling free-form natural laws from experimental data. Science, 324(5923), 81-85.</p>
<p>[12] Brunton, S. L., Proctor, J. L., & Kutz, J. N. (2016). Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15), 3932-3937.</p>
<p>[13] González, J., Dai, Z., Hennig, P., & Lawrence, N. (2016, May). Batch Bayesian optimization via local penalization. In Artificial Intelligence and Statistics (pp. 648-657).</p>
<p>[14] Geiger, M., Spigler, S., d'Ascoli, S., Sagun, L., Baity-Jesi, M., Biroli, G., & Wyart, M. (2019). Jamming transition as a paradigm to understand the loss landscape of deep neural networks. Physical Review E, 100(1), 012115.</p>
<p>[15] Le, B., Yvonnet, J., & He, Q.-C. (2015). Computational homogenization of nonlinear elastic materials using neural networks. International Journal for Numerical Methods in Engineering, 104(12), 1061-1084.</p>
<p>[16] Kirchdoerfer, T., & Ortiz, M. (2017). Data driven computing with noisy material data sets. Computer Methods in Applied Mechanics and Engineering, 326, 622-641.</p>
<p>[17] Banerjee, R., Sagiyama, K., Teichert, G. H., & Garikipati, K. (2019). A graph theoretic framework for representation, exploration and analysis on computed states of physical systems. Computer Methods in Applied Mechanics and Engineering, 351, 501-530.</p>
<p>[18] Paulson, N. H., Priddy, M. W., McDowell, D. L., & Kalidindi, S. R. (2018). Data-driven reduced-order models for rank-ordering the high cycle fatigue performance of polycrystalline microstructures. Materials & Design, 154, 170-183.</p>
</div></div></div><div class="field field-name-upload field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><table class="sticky-enabled">
<thead><tr><th>Attachment</th><th>Size</th> </tr></thead>
<tbody>
<tr class="odd"><td><span class="file"><img class="file-icon" alt="Image icon" title="image/png" src="/modules/file/icons/image-x-generic.png" /> <a href="https://imechanica.org/files/Poly_xsinx_8TrainPoints_noiseless_2.png" type="image/png; length=55832" title="Poly_xsinx_8TrainPoints_noiseless.png">Polynomial fitting with 8 training points (noiseless)</a></span></td><td>54.52 KB</td> </tr>
<tr class="even"><td><span class="file"><img class="file-icon" alt="Image icon" title="image/png" src="/modules/file/icons/image-x-generic.png" /> <a href="https://imechanica.org/files/GPR_NN_8TrainPoints_noiseless_0.png" type="image/png; length=259319" title="GPR_NN_8TrainPoints_noiseless.png">GPR vs NN with 8 training points (noiseless)</a></span></td><td>253.24 KB</td> </tr>
<tr class="odd"><td><span class="file"><img class="file-icon" alt="Image icon" title="image/png" src="/modules/file/icons/image-x-generic.png" /> <a href="https://imechanica.org/files/GPR_8vs20_TrainPoints_noiseless_0.png" type="image/png; length=331617" title="GPR_8vs20_TrainPoints_noiseless.png">GPR needs more points when learning xsin3x</a></span></td><td>323.84 KB</td> </tr>
<tr class="even"><td><span class="file"><img class="file-icon" alt="Image icon" title="image/png" src="/modules/file/icons/image-x-generic.png" /> <a href="https://imechanica.org/files/Poly_vs_GPR_vs_NN_20TrainPoints_NOISY_0.png" type="image/png; length=618174" title="Poly_vs_GPR_vs_NN_20TrainPoints_NOISY.png">Comparison of the 3 algorithms for noisy dataset</a></span></td><td>603.69 KB</td> </tr>
<tr class="odd"><td><span class="file"><img class="file-icon" alt="Image icon" title="image/png" src="/modules/file/icons/image-x-generic.png" /> <a href="https://imechanica.org/files/macro_supercompressible_metamaterial_0.png" type="image/png; length=2040765" title="macro_supercompressible_metamaterial.png">Macroscopic metamaterial</a></span></td><td>1.95 MB</td> </tr>
<tr class="even"><td><span class="file"><img class="file-icon" alt="Image icon" title="image/png" src="/modules/file/icons/image-x-generic.png" /> <a href="https://imechanica.org/files/micro_supercompressible_metamaterial_0.png" type="image/png; length=1864051" title="micro_supercompressible_metamaterial.png">Microscopic metamaterial</a></span></td><td>1.78 MB</td> </tr>
<tr class="odd"><td><span class="file"><img class="file-icon" alt="Image icon" title="image/png" src="/modules/file/icons/image-x-generic.png" /> <a href="https://imechanica.org/files/clustering_example_0.png" type="image/png; length=1020307" title="clustering_example.png">Clustering step in SCA method</a></span></td><td>996.39 KB</td> </tr>
<tr class="even"><td><span class="file"><img class="file-icon" alt="Image icon" title="image/png" src="/modules/file/icons/image-x-generic.png" /> <a href="https://imechanica.org/files/composite_toughness_0.png" type="image/png; length=184791" title="composite_toughness.png">Data-driven analysis of composite toughness</a></span></td><td>180.46 KB</td> </tr>
<tr class="odd"><td><span class="file"><img class="file-icon" alt="Image icon" title="image/png" src="/modules/file/icons/image-x-generic.png" /> <a href="https://imechanica.org/files/RNN_plasticity_yield_surface_0.png" type="image/png; length=103815" title="RNN_plasticity_yield_surface.png">RNN predicts path-dependent plasticity</a></span></td><td>101.38 KB</td> </tr>
<tr class="even"><td><span class="file"><img class="file-icon" alt="Plain text icon" title="text/plain" src="/modules/file/icons/text-plain.png" /> <a href="https://imechanica.org/files/Intro2ML_Regression_0.txt" type="text/plain; length=12028" title="Intro2ML_Regression.txt">Script for introduction to ML - please change extension from .txt to .py to run the code</a></span></td><td>11.75 KB</td> </tr>
</tbody>
</table>
</div></div></div>Fri, 31 Jan 2020 14:44:28 +0000mbessa23957 at https://imechanica.orghttps://imechanica.org/node/23957#commentshttps://imechanica.org/crss/node/23957The Machine Learning as an Expert System
https://imechanica.org/node/23261
<div class="field field-name-taxonomy-vocabulary-8 field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/10902">machine learning</a></div><div class="field-item odd"><a href="/taxonomy/term/12491">expert systems</a></div><div class="field-item even"><a href="/taxonomy/term/11581">data science</a></div><div class="field-item odd"><a href="/taxonomy/term/11938">artificial intelligence</a></div><div class="field-item even"><a href="/taxonomy/term/12101">deep learning</a></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><strong>1.</strong></p>
<p>To cut a somewhat long story short, I think that I can "see" that Machine Learning (including Deep Learning) can actually be regarded as a rules-based expert system, albeit of a special kind.</p>
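<p>One way to make this view concrete (my own minimal sketch, not from any reference): the simplest "learned" model, a decision stump, is literally an if-then rule of the kind an expert system would encode by hand. The function and data below are purely illustrative.</p>

```python
def fit_stump(xs, ys):
    # Learn a one-feature threshold rule by minimizing training misclassifications.
    best_t, best_err = None, None
    for t in sorted(set(xs)):
        preds = [0 if x <= t else 1 for x in xs]
        err = sum(p != y for p, y in zip(preds, ys))
        if best_err is None or err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Toy data: two well-separated groups.
xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = [0, 0, 0, 1, 1, 1]
t, err = fit_stump(xs, ys)
print(f"learned rule: IF x <= {t} THEN class 0 ELSE class 1  (training errors: {err})")
```

<p>The "learning" here just selects which rule to fire; a deep network can be read the same way, as a very large stack of such learned rules.</p>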
<p>I am sure that people must have written articles expressing this view. However, simple googling didn’t get me to any useful material.</p>
<p>I would deeply appreciate it if someone could please point out references in this direction. Thanks in advance.</p>
<p><strong>2.</strong></p>
<p>BTW, here is a very neat infographic on AI: [<a href="https://techjury.net/stats-about/ai/" rel="noopener" target="_blank">^</a>]; h/t [<a href="https://www.visualcapitalist.com/ai-revolution-infographic/" rel="noopener" target="_blank">^</a>]. ... Once you finish reading it, re-read this post, please! Exactly once again, and only the first part---i.e., without recursion! ...</p>
<p>[Originally published today at my personal blog, here [<a href="https://ajitjadhav.wordpress.com/2019/04/16/the-machine-learning-as-an-expert-system/" target="_blank">^</a>].]</p>
<p>Best,</p>
<p>--Ajit</p>
<p> </p>
</div></div></div>Tue, 16 Apr 2019 18:34:56 +0000Ajit R. Jadhav23261 at https://imechanica.orghttps://imechanica.org/node/23261#commentshttps://imechanica.org/crss/node/23261Postdoctoral Fellow/Research Associate – Lamina Cribrosa Biomechanics: A Diagnostic Biomarker for Glaucoma? – National University of Singapore
https://imechanica.org/node/22436
<div class="field field-name-taxonomy-vocabulary-6 field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/73">job</a></div></div></div><div class="field field-name-taxonomy-vocabulary-8 field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/8138">Ocular Biomechanics</a></div><div class="field-item odd"><a href="/taxonomy/term/9343">Translational Biomechanics</a></div><div class="field-item even"><a href="/taxonomy/term/8140">In vivo biomechanics</a></div><div class="field-item odd"><a href="/taxonomy/term/12101">deep learning</a></div><div class="field-item even"><a href="/taxonomy/term/11938">artificial intelligence</a></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p class="MsoNormal"><strong><span lang="EN-US" xml:lang="EN-US">Job description: </span></strong><span lang="EN-US" xml:lang="EN-US">We are looking for a bright, dynamic, and highly motivated individual to perform research in translational biomechanics with applications to ophthalmology. For more information about our Laboratory, please visit: </span><span lang="EN-US" xml:lang="EN-US"><a href="http://www.bioeng.nus.edu.sg/ivb/"><span>http://www.bioeng.nus.edu.sg/oeil/</span></a></span><span lang="EN-US" xml:lang="EN-US">. </span></p>
<p class="MsoNormal"><span lang="EN-US" xml:lang="EN-US">The proposed study aims to assess the biomechanics of the lamina cribrosa (LC) in vivo. The LC, at the back of the eye, provides mechanical support to the axons that pass through it and transmit visual information to the brain. Because the LC is porous, it is mechanically weak, and it is the major site of axonal damage in glaucoma – the most common cause of irreversible blindness. Throughout the day and night, the LC is exposed to three major loads: the intraocular pressure (IOP), pushing the LC posteriorly; the cerebrospinal fluid pressure (CSFP), pushing the LC anteriorly; and, during eye movements, the traction from the optic nerve, which distorts the LC. We believe that excessive and cumulative deformations of the LC under any of these loads could result in glaucomatous axonal damage. We also believe that the biomechanical behaviour of the LC differs between healthy and glaucoma subjects, and that accurate in vivo measurements of LC biomechanics could help us better diagnose glaucoma. </span></p>
<p><span lang="EN-US" xml:lang="EN-US">For this project, the successful candidate will use, improve and develop 3D image processing algorithms to assess the biomechanics of the LC in response to changes in IOP, CSFP and gaze position in a large population of normal and glaucoma patients (imaged under various conditions with 3D optical coherence tomography). The candidate will also assess how information about LC biomechanics could be used to better understand and diagnose glaucoma. This will be done using custom artificial intelligence (deep learning) algorithms in collaboration with the OCTAGON team (</span><span lang="EN-US" xml:lang="EN-US"><a href="http://www.bioeng.nus.edu.sg/OEIL/OCTAGON.html"><span>http://www.bioeng.nus.edu.sg/OEIL/OCTAGON.html</span></a></span><span lang="EN-US" xml:lang="EN-US">). Finally, the candidate will also be expected to provide assistance while patients are being imaged. <span> </span></span></p>
<p class="MsoNormal"><span lang="EN-US" xml:lang="EN-US">This is a project in collaboration with the Singapore Eye Research Institute (top 5 eye institute worldwide). </span></p>
<p class="MsoNormal"><strong><span lang="EN-US" xml:lang="EN-US">Qualification: </span></strong><span lang="EN-US" xml:lang="EN-US">Excellent programming skills (in C++ and Matlab/Python) and knowledge of 3D image tracking techniques (e.g. digital volume correlation) are required. The candidate is also expected to have a strong foundation in nonlinear continuum mechanics and finite element modeling. Knowledge of the Virtual Fields Method, code parallelization, optical coherence tomography, and deep learning is considered a plus. Hands-on experience in mechanical or biomechanical testing is also considered a plus. No background in ophthalmology is required. Candidates with PhD in Biomedical Engineering, Computer Science, Mechanical Engineering, Civil Engineering, or other related disciplines are encouraged to apply. </span></p>
<p class="MsoNormal"><strong><span lang="EN-US" xml:lang="EN-US">Earliest Starting Date: </span></strong><span lang="EN-US" xml:lang="EN-US">July 2018.</span></p>
<p class="MsoNormal"><strong><span lang="EN-US" xml:lang="EN-US">Duration: </span></strong><span lang="EN-US" xml:lang="EN-US">3 years. <strong><span> </span></strong></span></p>
<p class="MsoNormal"><span lang="EN-US" xml:lang="EN-US">To apply, please email a detailed CV and the names of two references to: </span></p>
<p> </p>
<p class="MsoNormal"><strong><span lang="EN-US" xml:lang="EN-US">Dr. Michael JA Girard</span></strong></p>
<p class="MsoNormal"><span lang="EN-US" xml:lang="EN-US">Ophthalmic Engineering & Innovation Laboratory</span></p>
<p class="MsoNormal"><span lang="EN-US" xml:lang="EN-US">Department of Biomedical Engineering</span></p>
<p class="MsoNormal"><span lang="EN-US" xml:lang="EN-US">National University of Singapore</span></p>
<p class="MsoNormal"><span lang="EN-US" xml:lang="EN-US">Email: </span><span lang="EN-US" xml:lang="EN-US"><a href="mailto:jobs@invivobiomechanics.com"><span>jobs@invivobiomechanics.com</span></a></span></p>
<p class="MsoNormal"><span lang="EN-US" xml:lang="EN-US">Homepage: </span><span lang="EN-US" xml:lang="EN-US"><a href="http://www.bioeng.nus.edu.sg/ivb/"><span>http://www.bioeng.nus.edu.sg/oeil/</span></a></span></p>
</div></div></div>Tue, 12 Jun 2018 08:45:33 +0000mgirard22436 at https://imechanica.orghttps://imechanica.org/node/22436#commentshttps://imechanica.org/crss/node/22436