Uppsala University Publications (uu.se)
Publications (10 of 15)
Peters, K., Bradbury, J., Bergmann, S., Capuccini, M., Cascante, M., de Atauri, P., . . . Steinbeck, C. (2019). PhenoMeNal: Processing and analysis of metabolomics data in the cloud. GigaScience, 8(2)
PhenoMeNal: Processing and analysis of metabolomics data in the cloud
2019 (English). In: GigaScience, ISSN 2047-217X, E-ISSN 2047-217X, Vol. 8, no 2. Article in journal (Refereed). Published.
National Category
Bioinformatics and Systems Biology; Pharmaceutical Sciences
Identifiers
urn:nbn:se:uu:diva-371635 (URN); 10.1093/gigascience/giy149 (DOI); 000462551600002 (); 30535405 (PubMedID)
Funder
EU, Horizon 2020, EC654241
Available from: 2018-12-07. Created: 2018-12-21. Last updated: 2019-05-02. Bibliographically approved.
Lampa, S., Dahlö, M., Alvarsson, J. & Spjuth, O. (2019). SciPipe: A workflow library for agile development of complex and dynamic bioinformatics pipelines. GigaScience, 8(5), Article ID giz044.
SciPipe: A workflow library for agile development of complex and dynamic bioinformatics pipelines
2019 (English). In: GigaScience, ISSN 2047-217X, E-ISSN 2047-217X, Vol. 8, no 5, article id giz044. Article in journal (Refereed). Published.
Abstract [en]

Background: The complex nature of biological data has driven the development of specialized software tools. Scientific workflow management systems simplify the assembly of such tools into pipelines, assist with job automation, and aid reproducibility of analyses. Many contemporary workflow tools are specialized or not designed for highly complex workflows, such as with nested loops, dynamic scheduling, and parametrization, which is common in, e.g., machine learning. Findings: SciPipe is a workflow programming library implemented in the programming language Go, for managing complex and dynamic pipelines in bioinformatics, cheminformatics, and other fields. SciPipe helps in particular with workflow constructs common in machine learning, such as extensive branching, parameter sweeps, and dynamic scheduling and parametrization of downstream tasks. SciPipe builds on flow-based programming principles to support agile development of workflows based on a library of self-contained, reusable components. It supports running subsets of workflows for improved iterative development and provides a data-centric audit logging feature that saves a full audit trace for every output file of a workflow, which can be converted to other formats such as HTML, TeX, and PDF on demand. The utility of SciPipe is demonstrated with a machine learning pipeline, a genomics, and a transcriptomics pipeline. Conclusions: SciPipe provides a solution for agile development of complex and dynamic pipelines, especially in machine learning, through a flexible application programming interface suitable for scientists used to programming or scripting.
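The flow-based programming style that SciPipe builds on can be illustrated with a minimal, stand-alone Go sketch. This uses plain goroutines and channels rather than SciPipe's actual API: each component is self-contained and connected to others only through channels, which is what makes components reusable and freely rewirable.

```go
package main

import (
	"fmt"
	"strings"
)

// generate is a source component: it emits each word on its output
// channel and closes the channel when done.
func generate(words []string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for _, w := range words {
			out <- w
		}
	}()
	return out
}

// upperCase is a transform component: it reads from an input channel
// and forwards upper-cased values on its output channel. It knows
// nothing about the component upstream of it.
func upperCase(in <-chan string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for w := range in {
			out <- strings.ToUpper(w)
		}
	}()
	return out
}

func main() {
	// Wire the two components into a tiny pipeline and drain it.
	for w := range upperCase(generate([]string{"hello", "world"})) {
		fmt.Println(w) // prints HELLO, then WORLD
	}
}
```

In SciPipe itself, components are shell commands or Go functions and the "channels" carry file targets, but the dataflow wiring principle is the same.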

Keywords
Scientific Workflow Management Systems, Workflow tools, Workflows, Pipelines, Reproducibility, Machine Learning, Flow-based Programming, Go, Golang
National Category
Bioinformatics (Computational Biology)
Research subject
Bioinformatics
Identifiers
urn:nbn:se:uu:diva-358347 (URN); 10.1093/gigascience/giz044 (DOI); 000474856100002 (); 31029061 (PubMedID)
Funder
eSSENCE - An eScience Collaboration; Swedish e-Science Research Center; EU, Horizon 2020, 654241
Available from: 2018-08-27. Created: 2018-08-27. Last updated: 2019-08-13. Bibliographically approved.
Lampa, S., Dahlö, M., Alvarsson, J. & Spjuth, O. (2019). SciPipe: Turning Scientific Workflows into Computer Programs. Computing in science & engineering (Print), 21(3), 109-113
SciPipe: Turning Scientific Workflows into Computer Programs
2019 (English). In: Computing in science & engineering (Print), ISSN 1521-9615, E-ISSN 1558-366X, Vol. 21, no 3, p. 109-113. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
IEEE Computer Society, 2019
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-384084 (URN); 10.1109/MCSE.2019.2907814 (DOI); 000466469900012 ()
Available from: 2019-06-18. Created: 2019-06-18. Last updated: 2019-06-18. Bibliographically approved.
Grüning, B. A., Lampa, S., Vaudel, M. & Blankenberg, D. (2019). Software engineering for scientific big data analysis. GigaScience, 8(5), Article ID giz054.
Software engineering for scientific big data analysis
2019 (English). In: GigaScience, ISSN 2047-217X, E-ISSN 2047-217X, Vol. 8, no 5, article id giz054. Article, review/survey (Refereed). Published.
Abstract [en]

The increasing complexity of data and analysis methods has created an environment where scientists, who may not have formal training, are finding themselves playing the impromptu role of software engineer. While several resources are available for introducing scientists to the basics of programming, researchers have been left with little guidance on approaches needed to advance to the next level for the development of robust, large-scale data analysis tools that are amenable to integration into workflow management systems, tools, and frameworks. The integration into such workflow systems necessitates additional requirements on computational tools, such as adherence to standard conventions for robustness, data input, output, logging, and flow control. Here we provide a set of 10 guidelines to steer the creation of command-line computational tools that are usable, reliable, extensible, and in line with standards of modern coding practices.

Place, publisher, year, edition, pages
Oxford University Press, 2019
Keywords
software development, big data, workflow, standards, data analysis, coding, software engineering, scientific software, integration systems, computational tools
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-390339 (URN); 10.1093/gigascience/giz054 (DOI); 000474856100022 (); 31121028 (PubMedID)
Funder
EU, Horizon 2020, 654241
Available from: 2019-08-09. Created: 2019-08-09. Last updated: 2019-08-09. Bibliographically approved.
Lapins, M., Arvidsson, S., Lampa, S., Berg, A., Schaal, W., Alvarsson, J. & Spjuth, O. (2018). A confidence predictor for logD using conformal regression and a support-vector machine. Journal of Cheminformatics, 10(1), Article ID 17.
A confidence predictor for logD using conformal regression and a support-vector machine
2018 (English). In: Journal of Cheminformatics, ISSN 1758-2946, E-ISSN 1758-2946, Vol. 10, no 1, article id 17. Article in journal (Refereed). Published.
Abstract [en]

Lipophilicity is a major determinant of ADMET properties and overall suitability of drug candidates. We have developed large-scale models to predict water-octanol distribution coefficient (logD) for chemical compounds, aiding drug discovery projects. Using ACD/logD data for 1.6 million compounds from the ChEMBL database, models are created and evaluated by a support-vector machine with a linear kernel using conformal prediction methodology, outputting prediction intervals at a specified confidence level. The resulting model shows a predictive ability of [Formula: see text] and with the best performing nonconformity measure having median prediction interval of [Formula: see text] log units at 80% confidence and [Formula: see text] log units at 90% confidence. The model is available as an online service via an OpenAPI interface, a web page with a molecular editor, and we also publish predictive values at 90% confidence level for 91 M PubChem structures in RDF format for download and as an URI resolver service.

Keywords
Conformal prediction, LogD, Machine learning, QSAR, RDF, Support-vector machine
National Category
Bioinformatics (Computational Biology)
Research subject
Bioinformatics
Identifiers
urn:nbn:se:uu:diva-347779 (URN); 10.1186/s13321-018-0271-1 (DOI); 000429065900001 (); 29616425 (PubMedID)
Funder
EU, Horizon 2020, 731075
Available from: 2018-04-06. Created: 2018-04-06. Last updated: 2018-08-28. Bibliographically approved.
Lampa, S., Alvarsson, J., Arvidsson Mc Shane, S., Berg, A., Ahlberg, E. & Spjuth, O. (2018). Predicting off-target binding profiles with confidence using Conformal Prediction. Frontiers in Pharmacology, 9, Article ID 1256.
Predicting off-target binding profiles with confidence using Conformal Prediction
2018 (English). In: Frontiers in Pharmacology, ISSN 1663-9812, E-ISSN 1663-9812, Vol. 9, article id 1256. Article in journal (Refereed). Published.
Abstract [en]

Ligand-based models can be used in drug discovery to obtain an early indication of potential off-target interactions that could be linked to adverse effects. Another application is to combine such models into a panel, allowing to compare and search for compounds with similar profiles. Most contemporary methods and implementations however lack valid measures of confidence in their predictions, and only providing point predictions. We here describe the use of conformal prediction for predicting off-target interactions with models trained on data from 31 targets in the ExCAPE dataset, selected for their utility in broad early hazard assessment. Chemicals were represented by the signature molecular descriptor and support vector machines were used as the underlying machine learning method. By using conformal prediction, the results from predictions come in the form of confidence p-values for each class. The full pre-processing and model training process is openly available as scientific workflows on GitHub, rendering it fully reproducible. We illustrate the usefulness of the methodology on a set of compounds extracted from DrugBank. The resulting models are published online and are available via a graphical web interface and an OpenAPI interface for programmatic access.

Keywords
target profiles, predictive modelling, conformal prediction, machine learning, off-target, adverse effects
National Category
Pharmacology and Toxicology
Research subject
Pharmacology
Identifiers
urn:nbn:se:uu:diva-357894 (URN); 10.3389/fphar.2018.01256 (DOI); 000449322200002 (); 30459617 (PubMedID)
Funder
EU, Horizon 2020, 731075
Available from: 2018-08-21. Created: 2018-08-21. Last updated: 2019-01-15. Bibliographically approved.
Lampa, S. (2018). Reproducible Data Analysis in Drug Discovery with Scientific Workflows and the Semantic Web. (Doctoral dissertation). Uppsala: Acta Universitatis Upsaliensis
Reproducible Data Analysis in Drug Discovery with Scientific Workflows and the Semantic Web
2018 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

The pharmaceutical industry is facing a research and development productivity crisis. At the same time we have access to more biological data than ever from recent advancements in high-throughput experimental methods. One suggested explanation for this apparent paradox has been that a crisis in reproducibility has affected also the reliability of datasets providing the basis for drug development. Advanced computing infrastructures can to some extent aid in this situation but also come with their own challenges, including increased technical debt and opaqueness from the many layers of technology required to perform computations and manage data. In this thesis, a number of approaches and methods for dealing with data and computations in early drug discovery in a reproducible way are developed. This has been done while striving for a high level of simplicity in their implementations, to improve understandability of the research done using them. Based on identified problems with existing tools, two workflow tools have been developed with the aim to make writing complex workflows particularly in predictive modelling more agile and flexible. One of the tools is based on the Luigi workflow framework, while the other is written from scratch in the Go language. We have applied these tools on predictive modelling problems in early drug discovery to create reproducible workflows for building predictive models, including for prediction of off-target binding in drug discovery. We have also developed a set of practical tools for working with linked data in a collaborative way, and publishing large-scale datasets in a semantic, machine-readable format on the web. These tools were applied on demonstrator use cases, and used for publishing large-scale chemical data. It is our hope that the developed tools and approaches will contribute towards practical, reproducible and understandable handling of data and computations in early drug discovery.

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2018. p. 68
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Pharmacy, ISSN 1651-6192 ; 256
Keywords
Reproducibility, Scientific Workflow Management Systems, Workflows, Pipelines, Flow-based programming, Predictive modelling, Semantic Web, Linked Data, Semantic MediaWiki, MediaWiki, RDF, SPARQL, Golang, Reproducerbarhet, Arbetsflödeshanteringssystem, Flödesbaserad programmering, Prediktiv modellering, Semantiska webben, Länkade data, Go
National Category
Pharmacology and Toxicology; Bioinformatics (Computational Biology)
Research subject
Bioinformatics; Pharmacology
Identifiers
urn:nbn:se:uu:diva-358353 (URN); 978-91-513-0427-4 (ISBN)
Public defence
2018-09-28, Room B22, Biomedicinskt Centrum, Husargatan 3, Uppsala, 13:00 (English)
Funder
EU, Horizon 2020, 654241; Swedish e-Science Research Center; eSSENCE - An eScience Collaboration
Available from: 2018-09-04. Created: 2018-08-28. Last updated: 2018-09-10.
Lampa, S., Willighagen, E., Kohonen, P., King, A., Vrandečić, D., Grafström, R. & Spjuth, O. (2017). RDFIO: extending Semantic MediaWiki for interoperable biomedical data management. Journal of Biomedical Semantics, 8, Article ID 35.
RDFIO: extending Semantic MediaWiki for interoperable biomedical data management
2017 (English). In: Journal of Biomedical Semantics, ISSN 2041-1480, E-ISSN 2041-1480, Vol. 8, article id 35. Article in journal (Refereed). Published.
Abstract [en]

BACKGROUND: Biological sciences are characterised not only by an increasing amount but also the extreme complexity of its data. This stresses the need for efficient ways of integrating these data in a coherent description of biological systems. In many cases, biological data needs organization before integration. This is not seldom a collaborative effort, and it is thus important that tools for data integration support a collaborative way of working. Wiki systems with support for structured semantic data authoring, such as Semantic MediaWiki, provide a powerful solution for collaborative editing of data combined with machine-readability, so that data can be handled in an automated fashion in any downstream analyses. Semantic MediaWiki lacks a built-in data import function though, which hinders efficient round-tripping of data between interoperable Semantic Web formats such as RDF and the internal wiki format.

RESULTS: To solve this deficiency, the RDFIO suite of tools is presented, which supports importing of RDF data into Semantic MediaWiki, with metadata needed to export it again in the same RDF format, or ontology. Additionally, the new functionality enables mash-ups of automated data imports combined with manually created data presentations. The application of the suite of tools is demonstrated by importing drug discovery related data about rare diseases from Orphanet and acid dissociation constants from Wikidata. The RDFIO suite of tools is freely available for download via pharmb.io/project/rdfio .

CONCLUSIONS: Through a set of biomedical demonstrators, it is demonstrated how the new functionality enables a number of usage scenarios where the interoperability of SMW and the wider Semantic Web is leveraged for biomedical data sets, to create an easy to use and flexible platform for exploring and working with biomedical data.

Keywords
MediaWiki, RDF, SPARQL, Semantic MediaWiki, Semantic Web, Wiki, Wikidata
National Category
Bioinformatics (Computational Biology)
Research subject
Bioinformatics
Identifiers
urn:nbn:se:uu:diva-329195 (URN); 10.1186/s13326-017-0136-y (DOI); 000409081000001 (); 28870259 (PubMedID)
Funder
eSSENCE - An eScience Collaboration; Swedish e-Science Research Center; EU, FP7, Seventh Framework Programme
Available from: 2017-09-10. Created: 2017-09-10. Last updated: 2018-08-28. Bibliographically approved.
Ameur, A., Dahlberg, J., Olason, P., Vezzi, F., Karlsson, R., Martin, M., . . . Gyllensten, U. B. (2017). SweGen: a whole-genome data resource of genetic variability in a cross-section of the Swedish population. European Journal of Human Genetics, 25(11), 1253-1260
SweGen: a whole-genome data resource of genetic variability in a cross-section of the Swedish population
2017 (English). In: European Journal of Human Genetics, ISSN 1018-4813, E-ISSN 1476-5438, Vol. 25, no 11, p. 1253-1260. Article in journal (Refereed). Published.
Abstract [en]

Here we describe the SweGen data set, a comprehensive map of genetic variation in the Swedish population. These data represent a basic resource for clinical genetics laboratories as well as for sequencing-based association studies by providing information on genetic variant frequencies in a cohort that is well matched to national patient cohorts. To select samples for this study, we first examined the genetic structure of the Swedish population using high-density SNP-array data from a nation-wide cohort of over 10 000 Swedish-born individuals included in the Swedish Twin Registry. A total of 1000 individuals, reflecting a cross-section of the population and capturing the main genetic structure, were selected for whole-genome sequencing. Analysis pipelines were developed for automated alignment, variant calling and quality control of the sequencing data. This resulted in a genome-wide collection of aggregated variant frequencies in the Swedish population that we have made available to the scientific community through the website https://swefreq.nbis.se. A total of 29.2 million single-nucleotide variants and 3.8 million indels were detected in the 1000 samples, with 9.9 million of these variants not present in current databases. Each sample contributed with an average of 7199 individual-specific variants. In addition, an average of 8645 larger structural variants (SVs) were detected per individual, and we demonstrate that the population frequencies of these SVs can be used for efficient filtering analyses. Finally, our results show that the genetic diversity within Sweden is substantial compared with the diversity among continental European populations, underscoring the relevance of establishing a local reference data set.

Place, publisher, year, edition, pages
Nature Publishing Group, 2017
National Category
Medical and Health Sciences
Identifiers
urn:nbn:se:uu:diva-337314 (URN); 10.1038/ejhg.2017.130 (DOI); 000412823800012 (); 28832569 (PubMedID)
Funder
Science for Life Laboratory - a national resource center for high-throughput molecular bioscience; Knut and Alice Wallenberg Foundation, 2014.0272; Swedish Research Council; Swedish National Infrastructure for Computing (SNIC), sens2016003; EU, European Research Council, 282330
Available from: 2018-01-08. Created: 2018-01-08. Last updated: 2018-08-27. Bibliographically approved.
Alvarsson, J., Lampa, S., Schaal, W., Andersson, C., Wikberg, J. E. S. & Spjuth, O. (2016). Large-scale ligand-based predictive modelling using support vector machines. Journal of Cheminformatics, 8, Article ID 39.
Large-scale ligand-based predictive modelling using support vector machines
2016 (English). In: Journal of Cheminformatics, ISSN 1758-2946, E-ISSN 1758-2946, Vol. 8, article id 39. Article in journal (Refereed). Published.
Abstract [en]

The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.

Keywords
Predictive modelling; Support vector machine; Bioclipse; Molecular signatures; QSAR
National Category
Pharmaceutical Sciences; Bioinformatics (Computational Biology)
Research subject
Bioinformatics
Identifiers
urn:nbn:se:uu:diva-248959 (URN); 10.1186/s13321-016-0151-5 (DOI); 000381186100001 (); 27516811 (PubMedID)
Funder
Swedish National Infrastructure for Computing (SNIC), b2013262, b2015001; Science for Life Laboratory - a national resource center for high-throughput molecular bioscience; eSSENCE - An eScience Collaboration
Available from: 2015-04-09. Created: 2015-04-09. Last updated: 2018-08-28. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-6740-9212
