Journal Club for May 2022: Machine Learning in Mechanics: curating datasets and defining challenge problems

elejeune

Over the past several years, machine learning (ML) applied to problems in mechanics has grown massively in popularity. Here is a figure from a slide that I made in early 2020 referencing a few examples from the literature — already this slide feels out of date! All of these authors (and many, many others) have published new papers on this topic. 

As more researchers apply ML methods to problems in mechanics, I believe two methodological questions have become increasingly important: 

I. When do challenges specific to mechanics motivate new ML method development? That is, how is mechanical data special?

and

II. How should we store and disseminate curated mechanical datasets from experiments and simulations? 

 In this Journal Club, I will focus primarily on theme II. It is my sincere hope to not only stimulate discussion on these topics, but also to crowdsource examples to add to an informal list of Open Access Mechanics Datasets that we have been working on (https://elejeune11.github.io/). If you know of a dataset that would be appropriate for this list, please use the comments section of this post to let me know and I will add it. In addition, ideas for new benchmark datasets for both research and education around common themes in the literature would also be a welcome contribution. 

 

1. BACKGROUND MATERIALS AND EXAMPLES OF RECENT ML APPLICATIONS IN MECHANICS

If you are new to the topic of ML and have a background in mechanics, perhaps the best starting point is Miguel Bessa’s February 2020 Journal Club post titled “Machine Learning in Mechanics: simple resources, examples & opportunities” (https://imechanica.org/node/23957). Miguel also gave a great talk last year on some of his contributions to research at the ML/Mechanics interface, currently available on YouTube (https://www.youtube.com/watch?v=GWpeGFFXZSM), that includes several inspiring examples of how ML can have impact in the field. In November 2021, Wei Gao also contributed a very interesting and informative Journal Club post focused on applying ML to atomistic materials modeling (https://imechanica.org/node/25544).

There has been so much recent research activity at the interface of ML and mechanics that summarizing all of it is beyond the scope of this blog post. However, from a very general standpoint, I would like to highlight a few common themes:

*Supervised Learning for Multi-Scale Modeling, Design, Inverse Analysis, Optimization, and/or Uncertainty Quantification (https://doi.org/10.1007/s00158-001-0160-4, https://doi.org/10.1021/acsnano.1c06340, https://doi.org/10.1039/C8MH00653A, https://doi.org/10.1016/j.matdes.2020.108509, https://doi.org/10.1016/j.cma.2020.113362, https://doi.org/10.1137/20M1354210, https://doi.org/10.1039/D0ME00020E)

*ML-Based Constitutive Modeling (https://doi.org/10.1061/(ASCE)0733-9399(1991)117:1(132), https://doi.org/10.1016/j.cma.2018.11.026, https://doi.org/10.1115/1.4052684, https://doi.org/10.1016/j.cma.2021.114217, https://doi.org/10.1038/s41524-022-00752-4)

*Physics-Informed Neural Networks (PINNs) for forward and inverse problems (https://doi.org/10.1016/j.jcp.2018.10.045, https://doi.org/10.1038/s42254-021-00314-5)

*ML-Assisted Material Characterization and Discovery (https://doi.org/10.1016/j.matchar.2019.109984, https://arxiv.org/abs/2111.05949)

Across these themes, ML methods have been applied to areas as diverse as biomechanics (https://doi.org/10.1038/s41746-019-0193-y, https://doi.org/10.1007/s10237-019-01190-w, https://doi.org/10.1007/s10237-018-1061-4, https://doi.org/10.1016/j.cma.2022.114871, https://doi.org/10.1016/j.jbiomech.2020.110124), additive manufacturing (https://doi.org/10.1038/s41524-021-00548-y, https://arxiv.org/abs/2204.05152), and mechanical design (https://arxiv.org/abs/2202.09427, https://doi.org/10.1002/adfm.202111610, https://doi.org/10.1016/j.cma.2020.113377, https://doi.org/10.1016/j.jmatprotec.2022.117497). Again, the papers referenced here are a tiny selection of the literature. I strongly encourage anyone with a particularly relevant paper or additional theme to share in the comments! 

1-REMARK. CURRENT DATA CURATION AND DISSEMINATION PRACTICES 

At present, most papers that apply ML methods to mechanics problems showcase these methods on unique and privately held datasets. On one hand, this is a logical approach: the field of mechanics is so diverse that everyone is working on a niche that may not have substantial scientific overlap with other recently published work. On the other hand, this approach can be limiting because it makes it difficult to quantitatively compare different methods and attain collective knowledge. For example, it is not necessarily clear which type of ML model and which model architecture/hyperparameters are the best starting point for making predictions based on mechanical data. 

There has been growing interest in addressing this by defining benchmark datasets and benchmark problems for research at the ML/mechanics interface. For example, at the 2019 NSF Computational Mechanics Vision Workshop, the topic was brought up multiple times (see report: https://micde.umich.edu/nsf-compmech-workshop-2019/). There have also been many related endeavors, including the Materials Genome Initiative (https://www.mgi.gov/), the Materials Project (https://materialsproject.org/), NanoMine (https://materialsmine.org/wi/home), the DIC Challenge (https://idics.org/challenge/), the Air Force Research Laboratory Additive Manufacturing Modeling Challenge Series (https://materials-data-facility.github.io/MID3AS-AM-Challenge/), and the Sandia Fracture Challenge (https://doi.org/10.1007/s10704-019-00361-1). These endeavors have helped the research community organize data and identify, from among multiple candidates, effective methods for addressing challenges. Critically, funding agencies have also taken interest in data curation and dissemination (e.g., in the US, this was addressed in a recent NSF Dear Colleague Letter https://www.nsf.gov/pubs/2019/nsf19069/nsf19069.jsp). And, there have been a few recently initiated projects to address the lack of open access mechanics-based datasets (e.g., https://pamspublic.science.energy.gov/WebPAMSExternal/Interface/Common/ViewPublicAbstract.aspx?rv=f364982b-b455-4161-83e2-ef1cb1846f93&rtc=24&PRoleId=10). The goal of this Journal Club post is to foster further discussion on this topic. 

 

2. IMPACT OF BENCHMARK DATASETS AND CHALLENGE PROBLEMS IN OTHER FIELDS

The development of many of the ML algorithms that are currently popular in mechanics research (e.g., convolutional neural networks) has largely been motivated by problems in computer vision. (However, as a brief side note: the popular Principal Component Analysis algorithm was inspired by analogous problems in mechanics! https://en.wikipedia.org/wiki/Principal_component_analysis#History). One of the reasons why computer vision has been a leading application of ML approaches is that there are multiple readily available benchmark datasets focused on problems in computer vision. For example: 

*MNIST (https://en.wikipedia.org/wiki/MNIST_database) is a collection of 70K (60K training, 10K testing) labeled handwritten digits from 0-9 each described as a 28x28 input bitmap. This dataset is small enough to be downloaded and analyzed on a standard laptop, and is often used as the example dataset in ML tutorials. 

*ImageNet (https://en.wikipedia.org/wiki/ImageNet) is a collection of over 14 million labeled images with 1K-20K categories depending on category definitions. Notably, this dataset served as the basis for the “ImageNet Large Scale Visual Recognition Challenge” (https://www.image-net.org/challenges/LSVRC/) which marked massive breakthroughs in the predictive ability of ML models. Though outside the scope of this Journal Club, the history of the development of this massive dataset is quite interesting (https://www.historyofdatascience.com/imagenet-a-pioneering-vision-for-computers/). 

Beyond these two perhaps most well recognized examples, there have also been multiple endeavors to define benchmark problems for different classes of ML relevant challenges. For example: 

*Scene flow benchmark datasets: https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html 

*Data distribution shift benchmark datasets: https://wilds.stanford.edu/datasets/  

*And many many others: https://en.wikipedia.org/wiki/List_of_datasets_for_machine-learning_research 

Overall, the accessibility of these datasets has massively enabled both research and education. For example, if you want to learn how to implement a convolutional neural network, you can download MNIST with one line of code and learn how to train an established ML model on the dataset in a matter of minutes. Alternatively, if you have a new idea for a ML algorithm, you can readily compare your approach to other approaches defined in the literature. This is somewhat analogous to evaluating novel mechanical simulation methods on popular benchmark problems (e.g., Cook’s Membrane, Lee's Frame, Patch test).   
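To make the “one line of code” point concrete, here is a minimal sketch of that workflow using Keras (one of several equally standard options; the toy model below is purely illustrative, not a recommendation):

```python
# Minimal sketch: fetch MNIST and train a small CNN classifier.
# Assumes TensorFlow/Keras is installed; PyTorch + torchvision would work equally well.
import tensorflow as tf

# One line fetches the full 70K-image dataset (60K train / 10K test).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train[..., None] / 255.0, x_test[..., None] / 255.0  # add channel, rescale

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```

Fashion MNIST (mentioned below) is available through the same interface, which is part of what makes drop-in benchmark replacements so convenient.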

2-REMARK. LIMITATIONS OF RELYING ON BENCHMARK DATASETS

Of course, along with the benefits of open access benchmark datasets, there are multiple potential limitations of over-reliance on them. First, benchmark datasets may be “easy” in comparison to real world challenges and thus may provide researchers with a false sense of accomplishment if an algorithm performs well on these data. To address this, there have been multiple endeavors to curate and disseminate more challenging datasets. For example, the Fashion MNIST dataset (https://github.com/zalandoresearch/fashion-mnist) was created as a more challenging drop-in replacement for MNIST. And, the ImageNet Large Scale Visual Recognition Challenge was retired in 2017 in favor of promoting more challenging problems such as 3D image analysis. Second, benchmark datasets may contain strange quirks and/or severe biases that will then be learned by the ML models. For example, if certain demographics are underrepresented in facial recognition benchmark datasets, the resulting ML models may subsequently exhibit biased predictions. In the context of mechanics datasets, models acquiring biases from the data is also an important concern, especially for experimental data where there are many opportunities to unintentionally add spurious features (e.g., variable lighting conditions for full field images). Overall, it is important to acknowledge that high accuracy on a single benchmark task still requires critical evaluation in the context of addressing real world challenges. 

 

3. CHALLENGE AND OPPORTUNITY: CURATED DATASETS FOR PROBLEMS IN MECHANICS

As outlined in the previous section, benchmark datasets have enabled significant methodological advances in other fields. Could readily available benchmark datasets enable methodological advances in predicting mechanical behavior? In this Journal Club I would also like to take the opportunity to think even bigger: Could curated and accessible mechanics datasets lead to unprecedented discovery? 

3a. BENCHMARK DATASETS FOR SHOWCASING AND EVALUATING COMPUTATIONAL METHODS

Despite the massive growth in popularity of research at the interface of mechanics and ML, there is not a clear picture of which ML methods perform best on mechanics problems. Because most researchers report the results of their investigations on privately held datasets, it is difficult to (1) directly reproduce results while debugging ML model implementations, and (2) directly compare the performance of different methods because error metrics will be reported on different datasets. This is limiting for researchers who want to develop new methods and demonstrate that their proposed approach exceeds the state of the art. And, it is limiting for researchers who are method agnostic and simply want to use the best available tool to address a particular problem.  
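As a purely illustrative sketch of what a shared benchmark enables (the data below are synthetic placeholders, not any of the datasets referenced in this post): once the train/test split and the error metric are fixed and reported, any two candidate models can be compared directly.

```python
# Illustrative only: with a fixed, published train/test split and a fixed error
# metric, two candidate models can be compared apples-to-apples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
x = rng.random((500, 64))                              # placeholder inputs
y = x.sum(axis=1) + 0.1 * rng.standard_normal(500)     # placeholder targets

# Fixing (and reporting) the split and the metric is what makes results comparable.
x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.2, random_state=0)

for name, model in [("random forest", RandomForestRegressor(random_state=0)),
                    ("small MLP", MLPRegressor(max_iter=1000, random_state=0))]:
    model.fit(x_tr, y_tr)
    print(f"{name}: test MAE = {mean_absolute_error(y_te, model.predict(x_te)):.3f}")
```

On a shared benchmark dataset, the same script run by different groups yields directly comparable numbers.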

In our research group, we have recently taken a small step to address this lack of benchmark data. Specifically, we have created multiple open access datasets based on curated finite element simulation data and published these datasets online under Creative Commons Attribution-ShareAlike 4.0 Licenses so that others are free to download and use them for their own pursuits. In our first dataset collection, we have taken direct inspiration from the MNIST dataset described above and created the “Mechanical MNIST” collection. In establishing this dataset collection, our goal was to take advantage of the benefits of the well known MNIST dataset (small enough to be managed on a standard computer, large enough to meaningfully train neural networks) and create a toy problem relevant to mechanics research. Therefore, our initial dataset involved treating the 28x28 MNIST input bitmaps as blocks of heterogeneous material (stiff embedded digit, soft background matrix) and deforming these domains following different boundary conditions. In our initial curated datasets, every input bitmap is mapped to multiple outputs: full field displacements, change in strain energy, and reaction forces. Since then, we have expanded on these themes to include: multiple simulation fidelities, the Fashion MNIST input bitmap pattern, simulations with phase field fracture, and Cahn-Hilliard input bitmap patterns. 

To date, this dataset collection includes: 

*Mechanical MNIST — Uniaxial Extension https://open.bu.edu/handle/2144/38693

*Mechanical MNIST — Equibiaxial Extension https://open.bu.edu/handle/2144/39428 

*Mechanical MNIST — Shear https://open.bu.edu/handle/2144/39429 

*Mechanical MNIST — Confined Compression https://open.bu.edu/handle/2144/39427 

*Mechanical MNIST — Multi-Fidelity https://open.bu.edu/handle/2144/41357 

*Mechanical MNIST — Fashion https://open.bu.edu/handle/2144/41450 

*Mechanical MNIST — Crack Path https://open.bu.edu/handle/2144/42757 

*Mechanical MNIST — Cahn-Hilliard https://open.bu.edu/handle/2144/43971 
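For readers who want a sense of how a dataset like this might be used, here is a minimal sketch of the corresponding supervised-learning task: regressing a scalar quantity of interest (e.g., change in strain energy) from the input bitmap. The file names and formats below are hypothetical placeholders; please consult the documentation of each dataset for its actual layout.

```python
# Sketch of a Mechanical MNIST-style regression task: predict a scalar quantity
# of interest (e.g., change in strain energy) from a 28x28 input bitmap.
# File names/formats are placeholders -- see the dataset documentation.
import numpy as np
import tensorflow as tf

# Hypothetical arrays: N bitmaps (values 0-255) and N scalar targets.
bitmaps = np.loadtxt("train_input_bitmaps.txt").reshape(-1, 28, 28, 1) / 255.0
energy = np.loadtxt("train_delta_strain_energy.txt")

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),          # scalar regression output
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(bitmaps, energy, epochs=10, validation_split=0.1)
```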

In conjunction with publishing these datasets, we have also explored different ML methods for predicting the mechanical behavior of heterogeneous domains. For example, we have looked at transfer learning as an approach to leverage low fidelity simulation data (https://doi.org/10.1016/j.jmbbm.2020.104276), we have designed a neural network architecture specifically for predicting full field quantities of interest such as full field displacement, strain, and damage fields (https://doi.org/10.1016/j.eml.2021.101566), and we have explored the efficacy of Generative Adversarial Networks for augmenting small training datasets (https://arxiv.org/abs/2203.04183). In all of these endeavors, we view our methodological results simply as baselines — we anticipate that in the coming years novel ML algorithms will be introduced that can exceed the performance of these approaches. In addition to the Mechanical MNIST collection, we have published two datasets that focus on different problems: 

*Buckling Instability Classification (BIC) — https://open.bu.edu/handle/2144/40085 — a simple mechanics-based classification dataset that we anticipate will be most relevant as an educational example (https://doi.org/10.1016/j.cad.2020.102948). 

*Asymmetric Buckling Columns (ABC) — https://open.bu.edu/handle/2144/43730 — another classification dataset with columns of complex geometry that we use to explore graph neural network based approaches to predicting mechanical behavior (https://arxiv.org/abs/2202.01380). 

Despite the diversity of these datasets, we are acutely aware that these examples cover only a tiny fraction of problems of interest for mechanics researchers (e.g., at present there are no examples in our dataset collection involving coupled problems, and we have not yet created an experimental version of any of these datasets). To this end, we have also been working on an informal list of Open Access Mechanics Datasets (https://elejeune11.github.io/) that summarizes work in this area by both us and others. So far, colleagues have shared examples of experimental data used to inform constitutive models of soft tissue (see: https://doi.org/10.1016/j.actbio.2020.12.006, https://doi.org/10.1016/j.jmbbm.2020.104216, and https://doi.org/10.1016/j.actbio.2019.10.020), and from high-throughput experiments of crushing additively manufactured cross barrels with different geometries (https://doi.org/10.1126/sciadv.aaz1708). As stated previously, we would love to add additional datasets to this list — if you know of a relevant dataset that is not included please either add it to the comments of this post or get in touch over email.  

3b. CURATED DATASETS FOR MECHANICAL DISCOVERY 

As demonstrated by recent Journal Club posts, there is no shortage of interesting and unsolved problems in mechanics. There are not only many novel types of materials and structures, but also materials with variable mechanical behavior that require extensive investigation before they can be regarded as well understood (e.g., additively manufactured materials, complex composites, biological tissue). And, there are many aspects of structure-level nonlinear mechanical response that remain poorly understood or are yet to be discovered. Simultaneously, we have reached a point where both experimental and computational techniques are able to generate massive amounts of data for a single investigation (e.g., full field deformations in the experimental setting, high fidelity finite element models in the computational setting). And, researchers have developed impressive frameworks for conducting high throughput experiments that generate massive datasets (https://doi.org/10.1126/sciadv.aaz1708 and https://doi.org/10.1016/j.matt.2021.12.017). 

In the previous section, I made a case for disseminating curated datasets to benchmark ML models. However, the true potential of mechanical data curation and open access dissemination goes well beyond that. Could ML methods be used to discover patterns either within or across datasets? Could ML methods be used to create predictive models with diverse input data streams? We have already seen some of this potential realized in work using unsupervised ML methods to identify patterns in data (e.g., https://doi.org/10.1016/j.matchar.2019.109984 and https://doi.org/10.1016/j.cma.2016.04.004). And, we have seen impressive results from researchers who have developed multi-scale and multi-fidelity predictive frameworks of system behavior (e.g., https://doi.org/10.1002/aenm.202003908, https://doi.org/10.1016/j.jmatprotec.2021.117485). If more curated mechanical data were broadly available, what would be possible? Beyond ML applications, could access to diverse mechanical datasets help either validate or falsify theoretical predictions of mechanical behavior? Would open access to mechanical datasets enable new directions for metamaterial design? Many systems where mechanics is coupled to other fields and/or where mechanical behavior changes with respect to time remain poorly understood. Would open access to mechanical datasets across different conditions enable unprecedented predictive modeling? 

 

4. DISCUSSION QUESTIONS 

The goal of this Journal Club is to foster discussion on curating mechanics-based datasets for ML applications and beyond. Here are a few additional discussion questions: 

*What resources or upcoming meetings are a good opportunity for others to learn more about the topic? 

*For those skeptical about the utility of ML for problems in mechanics in particular (e.g., https://arxiv.org/abs/2112.12054), what would impress you? Can you design a dataset, problem statement, or benchmark challenge problem where ML based predictions would be impactful? 

*For everyone, what types of benchmark datasets would you like to see in the future? What should new benchmark datasets and associated challenge problems contain? 

*Curating datasets is time, labor, and resource intensive (e.g., see FAIR guidelines https://www.go-fair.org/fair-principles/, https://sites.bu.edu/lejeunelab/files/2022/04/Lejeune_Data_Management_Plan.pdf) — should limited resources (i.e., time, money, storage space) be allocated to these endeavors?

*What is the most useful way for mechanical data to be formatted? What necessary metadata should accompany each dataset? 

*When does it make sense to curate and preserve data, and when is it unnecessary (e.g., a single FEA simulation can yield GB of data)? 

*Are there examples of data repositories for other fields that could be adapted/emulated for problems in mechanics? For example, the Materials Genome Initiative (https://www.mgi.gov/). 

*Do you see a role for benchmark datasets in mechanics education? For example, would benchmark datasets be a good resource for a first-year graduate student interested in research at the mechanics/ML interface? What should benchmark datasets for education contain? 

*Do you have a publicly available dataset that we can add to this informal list of curated mechanical data (https://elejeune11.github.io/)? If so, I would love to include it! 

 As a brief note, colleagues Juner Zhu, M. Khalid Jawed, Hongyi Xu and I are organizing a mini-symposium at SES 2022 on “Data-Driven Approaches for Complex Multiphysics Systems, Structures, and Materials.” Abstract submission is now open so please consider joining us at SES to continue discussion on research at the mechanics/ML interface (Symposia 3.3 https://na.eventscloud.com/eSites/658176/Homepage). 

Finally, please feel free to share other papers, methods of interest, and other upcoming events. The field is growing so quickly and there are many wonderful examples to highlight that I did not include above.

Here is a quick summary of some of the additional resources that followed from this Journal Club during May 2022 — thank you to everyone who participated! 

* We added datasets 8-13 to the informal list (https://elejeune11.github.io/) — thank you everyone for the suggestions + please continue to get in touch if you have more examples that would be appropriate! 

* Ajay mentioned the DesignSafe Data Depot (https://www.designsafe-ci.org/data/browser/public/) which is a fantastic resource for natural hazard relevant datasets. This topic is also discussed further in his June 2022 Journal Club post (https://imechanica.org/node/26009). 

* Steve linked to several relevant community resources: (1) the IACM conference for mechanistic machine learning and digital twins (https://mmldt.eng.ucsd.edu/home), (2) a short course on ML in mechanics (https://mmldtshortcourse.weebly.com/lecture-notes.html), and (3) an LLNL seminar series on data-driven physical simulations (https://data-science.llnl.gov/latest/news/virtual-seminar-series-explores-data-driven-physical-simulations)

* Overall, multiple people contributed very thoughtful posts to the comments on how they have interacted with mechanics datasets! Please check them out and continue to contribute as is appropriate. 

Comments

Markus J. Buehler

Thank you Emma for the thoughtful and informative post! I think you summarize the challenges and opportunities very well. Seems to me this could lead to a great discussion at the upcoming SES meeting. Another area for collaborations could be the development of classes for G or even UG students - especially since many of our students will be exposed to such tools in the future. I will be offering a course at MIT next year and would be happy to discuss and exchange notes. 

elejeune

Thanks Markus for your kind comment! It would be great to further discuss these challenges and aspects of a ML/Mechanics course at an upcoming meeting — I look forward to hearing what you are planning at MIT! 

Ajay B Harish

Thank you very much for the great discussion that you have initiated here, Emma. You have asked some very legitimate questions regarding data availability & sharing. We have been having a lot of discussions about this on this side of the pond in recent days including best practices in sharing simulation data and aiming at reproducibility. Just wanted to share a couple of my thoughts.

1. One of the areas that I work on is related to natural hazards modeling. The NSF funded "DesignSafe" (https://www.designsafe-ci.org/data/browser/public/) has developed a data portal that is particularly aimed at sharing datasets. I think DesignSafe has been around for 6-7 years now and has been reasonably successful. You can see the amount of data that is being shared. It is pretty remarkable. Maybe you can add a link to this.

2. One of the other impediments that we have been discussing related to data sharing and curating such databases is the "incentives" for many faculty to do this. Sharing a usable dataset requires quite a bit of work to organize and document it. The effort required to curate a good quality dataset can be almost as much as putting together a journal paper. Almost always, publications have been the metric of measure, and the question that arises is the incentive for sharing this data. I think you raise the same point that these are labor intensive tasks. Most PIs would want their students/post-docs to focus on publications/patents rather than curating a dataset. This is understandable, and we need to ask whether these young researchers would benefit if we ask them to spend time on these tasks. These are some hard questions, I guess.

3. This again goes particularly in the direction of reproducibility as well. There are papers, even from well-known groups, that are often not easily reproducible since we might not have the codes or the data that they have used. Today, many of the ASCE journals are asking authors to make the datasets used in their papers available. We recently had the experience that the authors of a paper never responded despite repeated requests for the data and codes to compare with our work. I kept writing on a weekly basis. I eventually roped in the editor, who also wrote a couple of times, but without any use. Just putting in a line that data will be made available upon reasonable request feels kind of useless if authors do not respond. I wonder if Editors could go one step further and remove those papers due to lack of compliance in such cases? Alternatively, could we say that authors have to put the data onto a repository like Zenodo before publishing? That way, it can only be updated but not deleted.

4. I think benchmark datasets are very important. But the mechanics community itself is so large that I wonder whether one such repository is even possible? Something that could be linked with the iMechanica initiative? But again the question arises of who controls the quality. If it were an open repository without peer review, then anyone could add anything. But if it were peer reviewed, how could this be done efficiently?

5. I do think that this can significantly impact education as well. Students could find good resources to compare their work and trust that this is a repository. But like Pt. 4, creating these could be the next hard question.

elejeune

Thank you Ajay for this very insightful post! I’m very happy to hear that others have been discussing this topic. In response to your comments: 

1. Thank you for sharing this link! In browsing through the datasets available, it looks like there is a nice crossover between “DesignSafe-CI: A Natural Hazards Engineering Research Infrastructure (NHERI)” and “mechanics”!  In addition to many examples of natural hazard reconnaissance data, at first glance (I only scrolled 2022-2021) I can find: 

*UoA-UW Reinforced Concrete Wall Database: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-2430 

*Direct Simple Shear Testing on Ottawa F50 and F65 Sand: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-2911 

*Centrifuge Testing of Liquefaction-Induced Downdrag on Axially Loaded Piles: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-2828 

*Shake-table Tests of Seven-story Reinforced Concrete Structures with Torsional Irregularities: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-1903 

*Liquefaction Evaluations of Finely Interlayered Sands, Silts and Clays: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-1844 

*Database of Diagonally-Reinforced Concrete Coupling Beams: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-3053 

*LEAP-2020: Cyclic Triaxial and Direct Simple Shear Tests Performed at GWU: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-2557 

*University of Auckland: Precast Concrete Wall Tests - Grouted Connections: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-2575 

*Compressibility-Based Interpretation of Cone Penetrometer Calibration Chamber Tests and Corresponding Boundary Effects: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-3475 

*Camera-based real-time damage identification of building structures through deep learning: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-3446 

*Collaborative Research: Simulating Crack Propagation in Steel Structures Under Ultra-Low Cycle Fatigue and Low-Triaxiality Loading from Earthquakes and Other Hazards: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-3394 

2. I think your analysis here is spot on — as students in my group will attest, the process of preparing a dataset is quite time intensive! However, I do think it is time well spent because many skills are learned along the way (e.g., critical thinking about what is important to store, how files should be formatted for efficiency, practice with writing bash scripts, etc.). Of course, my students may disagree with me :) And, for people who are not working on ML directly, the skills acquired while doing this may not be worth the time investment required. 

3. That sounds frustrating! One thing that I have grown to appreciate over the past two years of posting these datasets is that my own group is often the primary beneficiary of our previous data curation endeavors. For example, if a new student joins the group and wants to try out a ML method on our data, they can simply go to a website and find it already nicely formatted for them rather than having to track down an old storage drive :). Also, thank you for mentioning Zenodo (https://zenodo.org/) — it’s a great resource for sharing data!

4. This is a great question! One solution may be to peer review these datasets as a part of the publication process if the dataset is associated with a manuscript. However, more peer-reviewing responsibility may be the last thing anyone wants right now :) 

5. I agree! Again, this comes back to your initial point on the time and resource intensive nature of data curation. 

 

Ajay B Harish

I am glad to know that you found the DesignSafe database useful. Yes, it has a lot of data from civil, structural and coastal engineers. This includes data related to experiments and computations. These are some nice ones that you have identified and I am happy to see that there is also a contribution from someone from Auckland!

Do you have a template that you ask the students to follow when creating these datasets? It would be particularly important to have a standard way of doing these to ensure uniformity across them.

 

elejeune

Yes! Thanks again for sharing it — it is super relevant to this topic! 

With regard to following a template, I have four comments:

1. Because we are based at Boston University, we have been using the OpenBU Institutional Repository (https://open.bu.edu/). For each submission, we follow the OpenBU template that includes components such as a thumbnail image, an abstract, data rights, a hierarchy of dataset “collections,” and links to the relevant code (see attached figure).

 

2. Broadly speaking, we have been guided by trying to adhere to FAIR Principles (https://www.go-fair.org/fair-principles/). 

3. Thus far, the scope of our work is relatively small (i.e., we share medium sized computationally generated datasets where researchers can quickly download input files and output files to use for training ML models). Therefore, formatting these particular datasets is much less of a challenge than it could be for mechanics data broadly defined. 

4. For one of our recent datasets (Mechanical MNIST Crack Path) we actually ended up publishing two versions of the dataset: a “lite” version (https://open.bu.edu/handle/2144/42757) that matches the format of other datasets in the Mechanical MNIST collection, and an “extended” version (https://datadryad.org/stash/dataset/doi:10.5061/dryad.rv15dv486) that offers much more flexibility, at the expense of a slightly higher barrier in getting started. 

 

Do you know of any additional resources that are useful in this direction? In addition, I am also curious if you (or others!) have thoughts on the accessibility vs. flexibility tradeoff in data curation mentioned above. 

Manuel

Thank you, Emma. This is a fantastic resource. It is no secret that I am a huge fan of your work, which has been as rigorous as it has been creative. In fact, it was you who first inspired me and my lab to pledge to make all my future data and code openly available. With your help, we have made several rich sets of mechanical data available that the (bio)mechanics community can hopefully use moving forward. Specifically, we have made simple and pure shear testing data of blood clot and of right ventricular myocardium freely available for anybody to download and use. Collecting clean, accurate data is hard and requires significant investment of money and time. Thus, our hope is two-fold: (i) we are hoping that folks can use our data directly to inform constitutive models for medical simulations, (ii) we are hoping that folks can use our data as a benchmark, for example to train and validate new machine learning algorithms. If you are curious about our work, please check out the following publications as well as our data repository where you can download all the test data you could want!!! (many thanks to my graduate students Sotiris Kakaletsis and Gabriella Sugerman who have collected and analysed these data):

In summary, your work, including this journal club, has really opened my eyes to the importance of sharing one’s data and helped me recognize the crucial role of data sharing in ensuring the longevity and broader impact of our work. Well done and many thanks for your leadership by example.

elejeune

Thanks so much Manuel for your kind words! And thank you for sharing the fantastic work that you have been doing in your lab on making open access mechanics datasets. Three brief follow ups: 

1. I want to re-emphasize your point on the investment of money and time that goes into collecting these experimental datasets — even “large” experimental datasets like the ones that you have shared are relatively small compared to what is available for “big data” in other fields. Overall, I think ML methods that can leverage these small high quality datasets (perhaps in conjunction with standard simulation methods) are quite relevant to the mechanics field.

2. I also think it’s really great that in addition to providing these data, you and your team have invested significant additional effort in making these datasets accessible to others through documentation (e.g., https://dataverse.tdl.org/file.xhtml?fileId=105543&version=1.0). 

3. Finally, I want to point out that you made this data public through the “Texas Data Repository” (https://dataverse.tdl.org/). It seems that this is a great resource for others who are affiliated with universities that are Texas Digital Library (TDL) member institutions. 

jessicaz@andrew.cmu.edu

Thanks Emma for posting such an insightful discussion on this new, exciting research topic! Many researchers have started to use ML in their research, but what are the new challenging problems and opportunities? Your post provides a very thorough overview of ML in mechanics, particularly focusing on curating datasets, and also answers these questions. This is a great resource with many details for young people who want to jump into this emerging research area. I will share your post with students in my lab and my teaching classes at Carnegie Mellon.

elejeune

Thank you Jessica for your kind comments. I hope that your students also find the post helpful! There is so much exciting research going on with applying ML to mechanics, and so many opportunities for people to contribute new ideas!

Francisco

Thanks Emma for the very insightful post!

I agree with you that just releasing these datasets fosters discovery. Just by looking at the description of the datasets we can come up with novel machine learning techniques to address that particular problem.

I would also like to point out that the availability of these datasets levels the playing field for researchers from universities with fewer resources, which may not have access to supercomputers to run thousands of simulations or to precise experimental setups. Initiatives like this may increase the pool of researchers interested in the intersection of mechanics and machine learning, which can only be beneficial for the field.

Finally, I would like to mention a benchmark dataset in the field of cardiac strain estimation from different imaging modalities: https://doi.org/10.1016/j.media.2013.03.008. This benchmark has been used by many other researchers and has become the gold standard dataset to compare algorithms in cardiac image registration. Even though many applications are directly related to imaging, there are incredible opportunities at the intersection of imaging+machine learning+mechanics, some of which we are working on!

elejeune

Thank you, Francisco, for your thoughtful points! In response: 

1. Yes! I look forward to seeing future creative approaches and perhaps more generalizable insights that are enabled by broader access to mechanics-based datasets. Hopefully data sharing can increase synergy between researchers with different expertise. 

2.  Thanks in particular for raising this point! And I completely agree — there are so many different and innovative ways to leverage both a fundamental understanding of mechanics + creative ideas for modifying open source ML software that are less costly to implement than the initial data generation step. 

3. Thanks for sharing this benchmark dataset! For others who may be interested, the dataset is hosted through the “Cardiac Atlas Project” http://www.cardiacatlas.org/ which has a specific “Motion Tracking Challenge” http://www.cardiacatlas.org/challenges/motion-tracking-challenge/. 

Finally, I very much look forward to seeing more work from your group on research at the intersection of imaging + machine learning + mechanics!

WaiChing Sun

Hi Emma, thank you for sharing your vision on this important topic, and for taking the lead to provide the benchmark data with your own time and effort.

Your comment about sharing data and using the same data for benchmarking is spot on. It is almost impossible to create fair and meaningful comparisons of different ML models without a benchmark database. This is complicated by the fact that when we write a paper, we tend to focus on the advantages and promises of the proposed methods, and less so on making the work reproducible and robust, which does not always sound exciting but is actually very important.

I think establishing a set of benchmark problems with open-source data for validation and testing can be one step forward in resolving this problem. I also think that open-sourcing the models, or at least reporting all the detailed setup required to reproduce the exact results in the publications, is very important to ensure reproducibility, interpretability, transparency, and ultimately the trustworthiness of the proposed method. Without these active measures, it is often difficult to tell whether a model is really doing exceptionally well or is the product of (intentional/unintentional) cherry-picking. 

I have also attempted to provide my thoughts on the questions you listed, in case they are useful. 

*What resources or upcoming meetings are a good opportunity for others to learn more about the topic? 

The IACM has now introduced a new conference for mechanistic machine learning and digital twins. The first one was held in San Diego last year -- https://mmldt.eng.ucsd.edu/home. There will be a second one next year. 

As an educational resource, thanks to the support of NSF, Professor JS Chen and I have offered a course on the basics of machine learning in mechanics. The videos, lectures, slides, and Jupyter notebooks are all free to download. 

https://mmldtshortcourse.weebly.com/lecture-notes.html

There are other colleagues from computer science and the mechanics community who have posted great materials. For instance, the Livermore DDPS seminar:

https://data-science.llnl.gov/latest/news/virtual-seminar-series-explores-data-driven-physical-simulations

*For those skeptical about the utility of ML for problems in mechanics in particular (e.g., https://arxiv.org/abs/2112.12054), what would impress you? Can you design a dataset, problem statement, or benchmark challenge problem where ML based predictions would be impactful? 

I believe it is easy to overgeneralize both ways. There is definitely hype, as well as pessimism, extrapolated from small samples of evidence or personal experience. 

There have already been success stories, for instance, in protein folding. It seems like the difficulty is not demonstrating some success stories here and there, but establishing universally accepted metrics by which different models/approaches/paradigms can be compared, and building trust among the modelers/users/stakeholders. 

In the field of constitutive models, we have made a small attempt to build trust by using reinforcement learning to expose the potential weaknesses of a given model (see below). The idea is to introduce an adversarial agent to explore the loading path and use reinforcement learning to determine the types of loading in which the model tends to perform poorly. Then, this information can be used for re-training such that the weakness can be (potentially) addressed. 

https://www.sciencedirect.com/science/article/pii/S004578252030699X?dgcid=rss_sd_all

I think this can potentially help improve the transparency of the model and avoid cherry-picking via third-party validation. However, I think having the community use the same set of benchmark data (like the Sandia challenge) is probably a better way to move forward. 

*For everyone, what types of benchmark datasets would you like to see in the future? What should new benchmark datasets and associated challenge problems contain? 

I think the datasets you provided are great. I would like to see high-quality data that go beyond elasticity, for example data that involve fracture, damage, twinning, or plasticity. Data that involve inverse design (see Kumar, Tan, Zheng and Kochmann 2020, https://www.nature.com/articles/s41524-020-0341-6, for instance) and data on interesting microstructures would also be great. 

 

*Curating datasets is time, labor, and resource intensive (e.g., see FAIR guidelines https://www.go-fair.org/fair-principles/, https://sites.bu.edu/lejeunelab/files/2022/04/Lejeune_Data_Management_Plan.pdf) — should limited resources (i.e., time, money, storage space) be allocated to these endeavors?

Yes. I think it is necessary. 

*What is the most useful way for mechanical data to be formatted? What necessary metadata should accompany each dataset? 

For practical reasons, data stored in table format is easy to use and share. 

*When does it make sense to curate and preserve data, and when is it unnecessary (e.g., a single FEA simulation can yield GB of data)? 

Whether to preserve the data depends on the opportunity cost and how important it is for the workflow. However, I think in most cases it is also necessary for the trained model to be preserved, such that it can be validated in the future if needed. 

*Do you see a role for benchmark datasets in mechanics education? For example, would benchmark datasets be a good resource for a first-year graduate student interested in research at the mechanics/ML interface? What should benchmark datasets for education contain? 

Absolutely. The difficulty is that generating data by itself is very mechanical, and a first-year student could be overwhelmed by coursework as well as by learning how to do research. 

 

*Do you have a publicly available dataset that we can add to this informal list of curated mechanical data (https://elejeune11.github.io/)? If so, I would love to include it! 

We have posted some of our data and codes on our research group webpage and also on Mendeley. 

https://www.poromechanics.org/software--data.html 

 

elejeune

Thank you Steve for your very comprehensive and thoughtful post! Now we have crossed the threshold where there is more information in the comment section than in the original blog entry :) In response to some of your points: 

1. Thank you for sharing the information on future MMLDT conferences — I attended MMLDT-CSET 2021 virtually last fall, and it was an excellent opportunity to learn more about the field! I am also thrilled to see that the notes from the short course are free to download — that is a very valuable resource. Sharing the Livermore DDPS seminar recordings also reminded me that recordings from the 2020 “Machine Learning in Science and Engineering Mechanical Track” that you and Krishna organized are also available on YouTube: https://www.youtube.com/channel/UCCiwSYhLPtUU3schrt4xviA 

2. Your point about hype vs. pessimism is well stated, and I think the article that you shared “A non-cooperative meta-modeling game for automated third-party calibrating, validating and falsifying constitutive laws with parallelized adversarial attacks” really highlights the importance of challenging our modeling frameworks — both for ML and non-ML based models. I highly recommend that everyone check it out! 

3. I agree — it would be great to see future benchmark datasets focused on multiple types of non-linear mechanical behavior and on challenging microstructures. I think that more access to these types of quite complex data would help advance the development of “mechanics specific” ML methods if/when ML methods that have worked well on simpler problems in mechanics (e.g., Mechanical MNIST) fail. 

4. Thank you for raising the point about storing trained ML models as well! In addition to being useful for future validation, it is also possible that trained ML models could be useful for transfer learning, although in many cases with mechanical data this may not be straightforward. 

5. Finally, thanks for sharing the link to your lab’s software + data! Just now, I have added the collection of discrete element traction-separation data https://data.mendeley.com/datasets/n5v7hyny8n/1 (manuscript: https://doi.org/10.1016/j.cma.2018.11.026) to the informal list! 
