...
 
Commits (2)
......@@ -109,6 +109,11 @@
%\usepackage[resetlabels]{multibib} % restart bibliography numbering at each chapter
% defines the \FloatBarrier command that makes sure that there are no images after acknowledgements and references
\usepackage[section]{placeins}
% CHAPTER AUTHORS DEFINITION
\usepackage{enumitem, hyperref}
\newif\ifnewauthor
......@@ -176,13 +181,15 @@
\makeatother
% Rename Headline 'Bibliography' to 'References'
\renewcommand{\bibname}{References}
\begin{document}
\pagenumbering{Roman}
\include{01-frontmatter/titlepage}
\include{01-frontmatter/copyright}
......
......@@ -10,17 +10,17 @@
\section*{Abstract}
In this chapter we briefly describe the Git-based infrastructure that has been implemented as a result of the Conquaire project to support analytical reproducibility. The infrastructure implemented relies on principles of continuous integration as used in software engineering projects.
Importantly, we rely on a distributed version control system (DVCS) to store computational artifacts that are key to reproducing the analytical results of an experiment. Using a DVCS has the benefit that artifacts can be versioned and each version can be uniquely addressed via a revision number. The DVCS implementation we rely on is Git, extended by GitLab as a web interface and collaboration platform.
The heart of the infrastructure implemented in the Conquaire project is the so-called \emph{Conquaire server}.
We assume that each research project deposits relevant data and code in a Git repository. In the background, a GitLab CI Runner on the Conquaire server is triggered by Git push events on the local GitLab server and executes a number of tests on the data and runs the code or script to reproduce a particular result. In this way, we ensure that results can be reproduced independently of the original researchers on a separate machine.
In this chapter we briefly describe the infrastructure implemented and how tests are automatically executed when updates are committed to the Git repository.
\section{Introduction}
Principles of continuous integration have long been applied in software engineering to increase the quality of software artifacts and to prevent issues and failures due to the integration of code developed in a distributed fashion by multiple developers in large software engineering projects. The heart of any continuous integration setup is typically a so-called \emph{integration server}\index{integration server} that runs a number of tests once updates of the software are committed and pushed, and possibly rejects the committed changes if they do not pass a number of tests. In continuous integration, software developers are encouraged to submit smaller changes at regular intervals to prevent errors and the well-known \emph{`integration hell'}.
Inspired by continuous integration principles, in Conquaire we have attempted to transfer these principles from the domain of software engineering to the domain of research data management. The starting point for any continuous integration is the availability of a repository into which data and code can be committed. Thus, a central part of the Conquaire architecture is a Distributed Version Control System (DVCS) that allows researchers to deposit their artifacts into a central repository. An important advantage of such a repository is that data and code can be versioned and each version can be uniquely referenced by a specific revision number. This makes it possible to pinpoint and reference the exact version of code and data that was used to obtain a certain result, a central element of reproducibility.
Within Conquaire, we selected Git as a DVCS and GitLab as a web interface and collaboration platform to implement a university-wide repository allowing researchers to store their digital and computational research artifacts, code and data in particular.
A key component of the Conquaire architecture is the so-called \emph{Conquaire server}, which acts as a continuous integration server. Upon a new commit, a GitLab CI Runner on the Conquaire server executes a number of predefined tests on code and data and runs code or scripts on data with the goal of reproducing a specific result. A central goal of Conquaire is to support the reproduction of a certain result independently on a separate machine outside the direct control of the original researchers.
......@@ -145,7 +145,7 @@ our design decisions.
\subsection{Overview} \label{overview}
A part of the Conquaire project was the development of automated data quality tests.
The quality checks are integrated into the \emph{GitLab} platform of Bielefeld University.
The checks are written in \emph{Python 3.6} and use the \emph{lxml} package\footnote{\url{https://lxml.de/}} for parsing XML files as the only external requirement.
The pipeline of the quality check is shown in Figure~\ref{Fig-1: qc_pipeline} below. All steps are described in the sections below.
\begin{figure}[!h]
......@@ -156,7 +156,7 @@ The pipeline of the quality check is shown in Figure~\ref{Fig-1: qc_pipeline} be
\protect
\end{figure}
When a preconfigured YAML file (in this case: \emph{.gitlab-ci.yml}) is added to a repository on the GitLab instance, the checks are automatically executed via a continuous integration runner on the GitLab server. \\
The runner creates a Docker container. As the Docker image we use \emph{python:3.6-alpine} because it is lightweight and contains only an installed version of Python 3.6. In addition, we install the \emph{lxml} package and an \emph{sSMTP}\footnote{\url{https://wiki.debian.org/sSMTP}} instance to notify the user about the results of a check.
The user is informed via email about the result of applying the tests. The mail contains information about the repository and a URL to an HTML page with the detailed feedback, which can be rendered by any browser. The mail also shows the user the overall test result, which is displayed as a badge icon. The same icon is displayed in PUB if the user decides to create a data publication.
......@@ -193,10 +193,10 @@ quality-check:
- dockerexec
\end{lstlisting}
The whole pipeline is executed in a Docker container and makes use of continuous integration variables provided by GitLab. They are automatically filled with information from the user's GitLab profile.
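To illustrate how such variables can be consumed, the following minimal sketch of a notification step reads the predefined GitLab CI variables \texttt{CI\_PROJECT\_NAME} and \texttt{GITLAB\_USER\_EMAIL} and hands the mail to the sendmail-compatible interface that sSMTP provides; the sender address, report URL, and function name are illustrative and not taken from the actual Conquaire scripts.
\begin{lstlisting}
# notify.py -- minimal sketch of the email notification step.
# CI_PROJECT_NAME and GITLAB_USER_EMAIL are predefined GitLab CI
# variables; the sender address and report URL are illustrative.
import os
import subprocess
from email.message import EmailMessage

def send_feedback(report_url):
    """Mail the committing user a link to the detailed feedback report."""
    msg = EmailMessage()
    msg["Subject"] = "Quality check results for " + os.environ["CI_PROJECT_NAME"]
    msg["From"] = "conquaire@example.org"        # assumed sender address
    msg["To"] = os.environ["GITLAB_USER_EMAIL"]  # filled in by GitLab CI
    msg.set_content("Detailed feedback: " + report_url)
    # sSMTP installs a sendmail-compatible binary inside the container;
    # -t reads the recipients from the message headers.
    subprocess.run(["sendmail", "-t"], input=msg.as_bytes(), check=True)
\end{lstlisting}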
\subsection{Quality checks}
The Conquaire Quality Check pipeline involves a variety of tests that are automatically performed on the Git repository. Each time a commit occurs, the GitLab CI runner calls our pipeline, and several scripts are executed to guarantee that the provided data is in the best possible state. The three main checks that are implemented are the FAIR check, the CSV check, and the XML check. The pipeline is designed to be very modular and flexible to make it as easy as possible to extend it with further checks, e.g., for additional file types.
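As an illustration of what a modular per-file-type check can look like, the following sketch tests the well-formedness of an XML file with \emph{lxml}; it is a deliberately simplified stand-in for the actual XML check, and the file and function names are ours.
\begin{lstlisting}
# xml_check.py -- simplified stand-in for the XML quality check.
from lxml import etree

def check_xml(path):
    """Return a list of error messages; an empty list means well-formed."""
    try:
        etree.parse(path)  # raises XMLSyntaxError for malformed XML
    except etree.XMLSyntaxError as err:
        return [f"{path}: {err}"]
    return []
\end{lstlisting}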
\medskip
Every check begins by searching the repository with the bash \texttt{find} command and generating a list of all files of the specific type. For each file that was found, the corresponding test script is called to perform the actual checks and generate a log file with the errors and warnings that were observed. The details of the three specific checks are described below. Finally, an overall feedback file is created, showing the results of the checks with links to the log files, making it possible to look into the data and correct it if necessary. The contributor is informed about the results of the pipeline via email.
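A minimal sketch of such a driver loop, assuming the XML check above is saved as \emph{xml\_check.py}, could look as follows; the real pipeline shells out to the bash \texttt{find} command, whereas the sketch stays in Python for self-containedness, and all names are illustrative.
\begin{lstlisting}
# run_checks.py -- illustrative driver loop for the quality-check pipeline.
# Assumes the XML check sketched above is saved as xml_check.py.
from pathlib import Path

from xml_check import check_xml

# Extendable registry: map a file pattern to its per-type check function.
CHECKS = {"*.xml": check_xml}

def run_pipeline(repo_root="."):
    """Run every registered check on every matching file and write logs."""
    for pattern, check in CHECKS.items():
        # Path.rglob stands in for the bash `find` command.
        for path in Path(repo_root).rglob(pattern):
            errors = check(str(path))
            Path(str(path) + ".log").write_text(
                "\n".join(errors) if errors else "OK")

if __name__ == "__main__":
    run_pipeline()
\end{lstlisting}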
......@@ -205,7 +205,7 @@ Every check begins with searching the repository and generating a list of every
In our adaptive implementation of the FAIR metrics\footnote{\url{http://fairmetrics.org/}}, we check whether the three necessary files exist in the repository: the AUTHORS, LICENSE, and README files.\\
The files have to be placed in the root directory of the repository to fulfill the test condition, and they have to have either no extension, a plain-text extension (\texttt{.txt}), or a Markdown extension (\texttt{.md}). \\
We suggest saving the files as Markdown files.
Markdown is used as a standard on GitLab and many other websites because it has an easy-to-learn syntax and can be displayed in a web browser.
\medskip
The AUTHORS file should contain a list of all contributors and their email addresses so that they can be contacted. The LICENSE file should describe how the data can be further used and distributed by other researchers, either by declaring one of the common licenses or by providing their own. The README file should contain any other information that is related to the data and necessary or helpful for understanding the research that was done, e.g., a description of the data or of the experiment used to obtain it.
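The following sketch is our simplified rendering of this file-existence test, not the original script:
\begin{lstlisting}
# fair_check.py -- simplified rendering of the FAIR file-existence check.
from pathlib import Path

REQUIRED_FILES = ["AUTHORS", "LICENSE", "README"]
ALLOWED_EXTENSIONS = ["", ".txt", ".md"]  # none, plain text, Markdown

def check_fair(repo_root="."):
    """Return one error message for each required file missing from root."""
    root = Path(repo_root)
    errors = []
    for name in REQUIRED_FILES:
        if not any((root / (name + ext)).is_file()
                   for ext in ALLOWED_EXTENSIONS):
            errors.append(f"Missing {name} file in the repository root.")
    return errors
\end{lstlisting}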
......@@ -263,7 +263,7 @@ A link to this report is sent to the user committing the data for inspection of
Building on principles of gamification and to create incentives for committing ready-to-use data, the Conquaire system assigns badges to the data according to whether or not they passed the tests, and visualizes these badges in the generated reports and optionally on a PUB page where the data has been published.
During the Conquaire project, we have run a number of Git workshops with all case study partners, confirming our hypothesis that the subset of Git commands that is needed to commit data into the repository can be easily learned by our target population.
On the basis of our experience, we can definitely recommend Git, GitLab and our architecture for continuous integration to implement an institutional infrastructure for hosting data and checking their quality as a basis to ensure reproducibility of research results.
%\bibliographystyle{plain}
......
\chapter{Conclusion}
The DFG-funded Conquaire project has been concerned with investigating the feasibility of reproducing the analytical phase of research in experimental sciences. We have conducted eight case studies in various areas such as biology, linguistics, psychology, robotics, economics and chemistry as a basis to understand obstacles and best practices towards ensuring reproducibility of scientific results.
The reproduction of analyses still involves substantial effort. Originally, we had set ourselves the goal of investing a full working week (40 hours) into the reproduction of each of these case studies. In many cases, the time needed to reproduce a result exceeded this amount by a factor of three. The reason is that, while data and scripts were available, the documentation was often not sufficient to reproduce the analyses without step-by-step guidance from the authors of the original publication that we set out to reproduce.
In addition to the effort devoted to the reproduction itself, the Conquaire project has performed a number of workshops with all the researchers from the eight use cases to introduce them to the goals of the project, to Git, etc.
As a conclusion, we can say that the success rate for reproduction was very high. We were able to reproduce the results within all case studies. Yet, the level of reproducibility was not the same for all projects. According to the taxonomy of levels of reproducibility introduced in chapter \ref{conquaire_book_intro}, we have one clear case of full analytical reproducibility and three further projects that reached the category of full analytical reproducibility by the end of the project after recoding analytical workflows using open and free programming languages. Four case studies have the status of \emph{at least} limited reproducibility, as the reproduction of their work (still) involves obtaining third-party commercial licenses for tools. Only a minimal further investment would be required to bring these cases to the level of full analytical reproducibility.
In our view, this is a clear success, showing that analytical reproducibility is feasible.
The main obstacles to analytical reproducibility found were i) the lack of documentation and thus reliance on guidance by the original authors, ii) the reliance on manual steps in the analytical workflow (e.g., clicking in a GUI), iii) the reliance on non-open and commercial software, and iv) the lack of information about which particular version of software and/or data was used to generate a specific result.
An institutional policy and infrastructure can alleviate most of the problems mentioned above. Our experience shows that using a distributed version control system is a best practice to be followed and a basic step towards reproducibility, and that scientists in any field can quickly learn to work with Git, in particular if web interfaces such as GitLab are provided. Most of the scientists involved in the Conquaire case studies had no issues uploading their data to a Git repository.
Our experience also shows that scientists are deeply motivated to make their results reproducible, even if this leads to a level of exposure that might lead to errors being discovered. In some cases we discovered minor errors in plots, scripts, etc., and the scientists involved were more than happy to correct these minor issues. The exposure and independent validation bring benefits that are generally appreciated. This is indeed an important conclusion from Conquaire. While at the beginning of the project we were sceptical about how willing scientists would be to make their research artifacts available and to support reproduction, we are now more than convinced that there is a strong culture within science of being as open as possible to ensure external scrutiny and validation of scientific results.
Our experience has thus been positive, and we would like to encourage research organizations world-wide to set up policies encouraging their researchers to make their results analytically reproducible. On the basis of the results of Conquaire, Bielefeld University is working towards the establishment of policies in this respect.
We would like to end this book with a number of clear recommendations to research institutions wanting to support their scientists in making their results reproducible:
\begin{itemize}
......@@ -20,4 +27,5 @@ We would like to end this book with a number of clear recommendations to researc
\item \textbf{Open software:} We clearly recommend to set up policies that encourage researchers to rely on open, free and non-commercial software to facilitate reproduction of results on independent machines without the need to install commercial software and pay high license fees.
\item \textbf{Metadata:} Organizations should train and support researchers in creating high-quality metadata for their data and also train them in selecting and specifying under which licenses their data can be used. Consulting on data exploitation and use while taking into account privacy aspects is crucial. Bielefeld University has created a center for research data management with the mission of consulting and training researchers on such dimensions.
\end{itemize}
However, the most important lesson learned is that analytical reproducibility should not be considered an afterthought and delayed to the end of a research project. Analytical reproducibility is easy to achieve if one designs experiments and software environments from the start with the goal of making analytical workflows executable on any server by a third party. This minimizes the effort needed, as workflows are not disrupted in the middle of a project, and it reduces the opportunity to post-modify data and results, thus creating transparency. Applying continuous integration principles from the start, taking data quality into account, publishing data and scripts early in the research process, specifying tests that monitor data quality and run analytical workflows independently of the researchers carrying out the research, and publishing results continuously and transparently in a repository is an effective way of fostering analytical reproducibility.
......@@ -44,7 +44,7 @@ Accordingly, the main objective of that study was to relate inter-species differ
The overall data workflow used in this project is summarized in the chart shown in Fig. \ref{fig:fig2-workflow} (left column). There were three processing episodes: (i) data acquisition, (ii) manual editing and annotation, and (iii) secondary processing. The coloured boxes illustrate the procedure for recording the different types of data and how the data were ultimately processed to reconstruct body and leg kinematics as displayed in Fig. 3 in the paper of Theunissen et al. \cite{Theunissen_EtAl_2015}. The colours of the boxes indicate the software used for a given step in the data processing pipeline (yellow: \textit{Vicon Nexus}; green: \textit{PixeLINK Capture}; blue: \textit{MATLAB}). The boxes and connecting arrows are labelled with the data file types produced, the relative file paths to the corresponding subdirectories, and the names of custom-written MATLAB (MathWorks, Natick, MA, USA) scripts.
\begin{figure}[]
\centering
\includegraphics[width=11cm,keepaspectratio]{images/fig2-Workflow.png}
\caption{\textbf{Research data acquisition and processing pipeline.} For raw data acquisition, whole body motions were recorded with a marker-based motion capture system (Vicon) and an additional digital video camera. Furthermore, the anatomy of the animal, along with the marker positions on different body segments were recorded with a microscope camera. In a first step of manual editing and annotation, marker trajectories of selected episodes were labelled and, potentially, connected in case of recording gaps. This step resulted in a \textit{.c3d}-file, a file format described in section \ref{c3dServerIO}. The body pictures were used to generate a body model containing, for example, segment lengths and information about marker position in a body-centred coordinate system. The model is stored in a MATLAB \textit{.mat}-file. Finally, the kinematic reconstruction was achieved in MATLAB by combining marker trajectories with the body documentation. The resulting processed data, i.e., joint angle time courses, gait pattern, and velocity, were saved as another MATLAB file.}
......@@ -85,7 +85,7 @@ Accordingly, the main objective of that study was to relate inter-species differ
\begin{figure}[]
\centering
\includegraphics{./images/fig3-MotionCaptureBodyKinematics.jpg}
\caption{\textbf{A marker-based motion capture and whole-body kinematics calculations.} \textbf{A:} Insects were labelled with reflective markers. \textbf{B:} For kinematic analysis, the body was modelled by a branched kinematic chain. The main body chain (left) consists of the three thorax segments (Root, T2, T1) and the head. Six side chains (right) model the legs, with the segments coxa, femur and tibia (cox, fem, tib; only right legs are shown, labelled R1 to R3). All rotation axes (DoF) are indicated (3 for the root segment, 2 for thorax/head segments, and 5 per leg). DoF are denoted according to the subsequent segment and the axis of the local coordinate system around which the rotation is executed. Leg DoF are: cox.x, cox.y, cox.z (labelled for R2 in right panel), fem.y and tib.y (labeled for R1 in right panel). [Fig. 1 A, B of \citep{Theunissen_Duerr_2013}]}
......@@ -165,7 +165,7 @@ As a result of our reproduction experiment we could reproduce the walking and cl
Figure \ref{fig:compare_duerr} shows on the left the original panel from the paper published by Theunissen et al. \cite{Theunissen_EtAl_2015} for \textit{C. morosus}. On the right, our reproduction of the same trial is depicted. As the figure shows, aside from the rendering of the obstacle and the colouring, we could successfully reproduce the plots from the original paper.
\begin{figure}[]
\centering
\includegraphics[width=12cm]{../ch2-BiologyDuerr/images/fig5-compare.png}
\caption{\textbf{Representative trial of unrestrained walking and climbing behaviour of \textit{C. morosus} as one of the three species investigated in the original paper published by Theunissen et al. \cite{Theunissen_EtAl_2015} (Figure 3).}
......@@ -185,12 +185,14 @@ Figure \ref{fig:compare_duerr} shows on the left the original panel from the pa
We have described a reproducibility case study in the field of biology. In particular, we have attempted to reproduce the main results of a study in whole-body movement analysis of three species of stick insects. The main objective of the study was to relate inter-species differences in kinematics to differences in overall morphology, including features such as leg-to-body-length ratio, that were not an obvious result of phylogenetic or ecological divergence. We have shown that we could successfully reproduce a main figure of the paper \emph{``Comparative whole-body kinematics of closely related insect species with different body morphology''} by Theunissen et al. \cite{Theunissen_EtAl_2015}. We classify this case as one of \emph{limited analytical reproducibility}. While we could reproduce the whole-body movements for a number of experimental runs that the authors provided in a Git repository, this has only been possible with the direct guidance of the authors. Further, the reproduction relies on the use of commercial software, in particular MATLAB as well as the C3Dserver, which runs on Windows only.
\FloatBarrier
\section*{Acknowledgements}
We would like to thank Florian Paul Schmidt for uploading the files to the \textit{biological-cybernetics} repo in the GitLab \textit{Conquaire} group. We would like to thank Lukas Biermann and Fabian Herrmann (Student Assistants in Conquaire) for helping with the reproduction of the analyses in MATLAB.
\bibliographystyle{unsrt}
{\raggedright % group bib left align
\bibliography{ch2-BiologyDuerr}
}
......
......@@ -338,6 +338,7 @@ In future work, the potential and benefits of using virtualization in combinatio
%A detailed description of the work was presented in .
\FloatBarrier
%\bibliographystyle{plain}
\bibliographystyle{unsrt}
%\bibliographystyle{alpha}
......
......@@ -256,13 +256,14 @@ The data has been uploaded to the DFG FOR1525 project website (https://www.ice-n
%aimed at reproducing the analytical workflow that lead to the results published in the paper \emph{`BINARY: an optical freezing array for assessing temperature and time dependence of heterogeneous ice nucleation'} by Budke and Koop \cite{Budke2015}. The central diagram of this work showing the relation between the number of active sites of ice nucleation in dependence of temperature could be successfully reproduced by reimplementing the original analytical workflow in OriginPro via a Python script. As we did not exactly reproduce the original workflow, we have thus a case of limited analytical reproducibility. As a result of the project, both the derived data and the Python script described in this chapter are available for further re-use and validation of the original results.
%
%S-7
\FloatBarrier
\section*{Acknowledgments} \label{Ack}
%S-7
We thank Carsten Budke for providing the data and technical discussions during the computational reproducibility process.
\bibliographystyle{unsrt}
{\raggedright % group bib left align
\bibliography{ch4-ChemistryKoop}
}
......
......@@ -321,11 +321,13 @@ Firstly, if researchers store their simulation data on an ongoing basis during a
Secondly, if pre-generated simulation data is available from a published paper from the original authors, then the FLAViz toolbox could be directly applied to this dataset to reproduce the plots of the published paper. These can then be used to check the validity of the claims made by the original authors in their paper.
It is in these two important ways that toolboxes such as FLAViz can be regarded as helping us to ensure the analytical reproducibility of research data.
%S-*7
\FloatBarrier
\section*{Acknowledgements}
We would like to thank Krishna Devkota for implementing the FLAViz library and Fabian Hermann for documentation and bug fixing.
\bibliographystyle{unsrt}
{\raggedright % group bib left align
\bibliography{ch5-EconomicsHoog}
}
......
......@@ -311,12 +311,13 @@ The analytical pipeline that was used to generate results for publication was de
%S-7
\FloatBarrier
\section*{Acknowledgments}
We would like to acknowledge the support of Lukas Biermann and Fabian Herrmann for helping with implementation of the scripts and data analysis.
\bibliographystyle{unsrt}
{\raggedright % group bib left align
\bibliography{ch6-LinguisticsRohlfing}
}
......
......@@ -542,12 +542,12 @@ The demonstration code for the Deep Disfluency library worked out of the box, en
The research project is already very much aligned with FAIR data principles as it adopts open software practices and makes large parts of the original experiments easily accessible.
Overall, this case corresponds to a case of limited reproducibility as the results could be partially reproduced for the offline settings, albeit not exactly.
\FloatBarrier
\bibliographystyle{unsrt}
{\raggedright % group bib left align
\bibliography{ch7-LinguisticsSchlangen}
}
% Add Bibliography to ToC
\addcontentsline{toc}{section}{Bibliography}
......@@ -316,7 +316,8 @@ Inspite of all data being available and the results being in principle reproduci
%S-6
This chapter has described an analytical reproducibility case study in the area of neuro-cognitive psychology. In particular, we have described our effort to reproduce the main results of the article by Foerster and Schneider: \emph{`Expectation violations in sensorimotor sequences: Shifting from LTM-based attentional selection to visual search'} \cite{foerster_schneider_2015b}. The main result of that article was the finding that expectation violations in a well-learned sensorimotor sequence in humans caused a regression from LTM-based attentional selection to visual search. The authors of the original publication (also co-authors of this chapter) provided the Conquaire project with all primary data and all scripts and spreadsheets used to reproduce the results. While we were successful in reproducing the results, we classify this use case as one of \emph{limited analytical reproducibility}. The reason for this is that some parts of the analytical pipeline rely on proprietary and commercial tools such as MATLAB or SPSS that cannot easily be replaced by open and free tools. Further, the lack of documentation of the pipeline requires interaction with the original authors to reproduce the pipeline faithfully. Both limitations could easily be overcome if further efforts were invested.
%S-*7
\FloatBarrier
\section*{Acknowledgements}
%S-*7
We thank Lukas Biermann and Cord Wiljes for assistance with the reproduction of the analyses.
......@@ -328,7 +329,8 @@ We thank Lukas Biermann and Cord Wiljes for assistance with the reproduction of
%\end{verbatim}
\bibliographystyle{unsrt}
{\raggedright % group bib left align
\bibliography{ch8-PsychologySchneider}
}
......
......@@ -306,13 +306,16 @@ This chapter has shown that it is possible to reproduce a robotic experiment at
%% References with bibTeX database:
\FloatBarrier
\bibliographystyle{unsrt}
{\raggedright % group bib left align
\bibliography{ch9-TechnologyWachsmuth}
}
% Add Bibliography to ToC
\addcontentsline{toc}{section}{Bibliography}
%% Authors are advised to submit their bibtex database files. They are
%% requested to list a bibtex style file in the manuscript if they do
%% not want to use model1-num-names.bst.
......