Each year Colorado State University hosts CSU Demo Day, a showcase celebrating innovation and entrepreneurship within the University. This year our lab manager and research associate, Aubrey Ibele, was one of the 106 presenters chosen for the showcase.

Below is her research poster and the script of her presentation to attendees.

Problem

  • Discovery Paradigm:
    • Large pool of candidates
    • Linear series of screens
    • Searching for a needle in a haystack
    • Unpredictable outcome
    • Focused on each step individually
  • Engineering Paradigm:
    • Design
    • Manufacturing
    • Predictive models

*Random exploration vs. rational design

  • Synthetic Biology: a new way to interpret and reprogram living biological systems using engineering principles
    • Not an industry, a way of thinking
    • New way to do biological research
    • Rational design
    • Iterative Improvement
    • Model/data driven – requires high throughput
      • Data is larger, messier, more complex (different types)
    • Requires applying industrial manufacturing techniques to research
  • Top 7 Issues with science (according to 270 polled scientists)
    • Academia has a huge money problem
    • Too many studies are poorly designed
    • Replicating results is crucial — and rare
    • Peer review is broken
    • Too much science is locked behind paywalls
    • Science is poorly communicated
    • Life as a young academic is incredibly stressful
  • Scientific Process Cycle (SPC) (also known as product development cycle)
    • Design
    • Build
    • Test
    • Learn
  • Friction in the Scientific Process
    • Manufacturability – we can now design DNA sequences we can’t make
    • Data Analysis – no point in collecting data you cannot analyze
    • Loss of samples
    • Poor experimental design
    • Process control
    • Assay development
    • Data management

*If we cannot close the gap, we are wasting time and money.

**Core elements of the scientific process are necessary, but due to friction within the cycle, they don’t fit together the way they should.

***If the core elements don’t fit together, the data doesn’t flow properly through the cycle

  • Leads to…
    • DOWNTIME (8 sources of waste)
    • Wasted time trying to troubleshoot experiments
    • Wasted money on supplies that were improperly utilized
    • Wasted time trying to optimize a poorly designed project
    • Over-processing or under-capturing data
    • Loss of data
    • Lack of repeatability

Solution

  • 3D
  • Supports the SPC
  • Fully integrated

Innovation

  • Elements of lean manufacturing
  • Work flow independent
  • Fully integrated system for data capture, processing, and analysis
  • Modular computational infrastructure
    • Plug ins
  • Value
    • Enhanced repeatability
    • Increased Productivity
    • Higher level of data integrity
    • Standardization of processes (data capture, experiments, tracking)
    • Optimization of time/resources
    • Less waste
    • Reduced human error
    • Quicker data analysis

Audience

  • Life science laboratories that are committed to scientific progress and to increasing productivity without changing their current capabilities.

What you need is a customized informatics infrastructure that will save time, money, and stress so you can take your science to the next dimension.

There is a massive problem in the life sciences community that no one is talking about. Every scientific project goes through the project cycle: Design, Build, Test, Learn. Let's take the production of a synthetic antibody as an example. First you design your project: you decide on your DNA sequences and solidify the experimental workflow you plan to use to test those sequences for antibody production. Then you build those DNA sequences and move on to testing them. The testing generates complex data sets from the physical samples. From these data sets we learn how the DNA functions and which sequences work best. The problem is… there is a massive disconnect between the physical aspects of a scientific project and the digital aspects. Most scientists go through the project cycle in a linear fashion, failing to take into account how the different steps of the cycle relate to each other.

As a result…

Assays and workflows aren’t properly documented and optimized, reducing repeatability.

Samples that are created aren't labeled appropriately, their existence is often not recorded, and the information/data associated with those samples is rarely linked to its physical counterpart.

Also, complex data sets are captured and stored in various formats and locations.

All of this leads to wasted time and reagents when experiments aren't performed correctly, a breakdown in data integrity, and data retrieval and analysis that is difficult and time consuming.

Due to these points of friction, the core elements of the project cycle don’t fit together the way they should.

The solution to this issue is to add another dimension to the project cycle: informatics. This forces scientists to design points of information processing and retrieval into their scientific projects. In other words, we must consider the data that needs to be captured to inform our hypothesis at each step of the cycle, so that we neither over-capture nor under-capture data.
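
To make this concrete, here is a minimal sketch of what declaring the data to capture at each step of the cycle might look like. It is written in Python, and the stage names and fields are hypothetical placeholders, not part of our actual system.

```python
# Hypothetical sketch: declare up front which data each stage of the
# Design-Build-Test-Learn cycle must capture, so nothing is over- or
# under-collected. Field names are illustrative only.
CAPTURE_PLAN = {
    "design": ["target_sequence", "experimental_workflow", "hypothesis"],
    "build":  ["construct_id", "materials_lot_numbers", "protocol_version"],
    "test":   ["assay_name", "instrument", "raw_data_file"],
    "learn":  ["analysis_pipeline", "summary_metrics", "next_design_notes"],
}

def check_capture(stage: str, record: dict) -> list[str]:
    """Return the required fields that are missing from a stage's record."""
    return [field for field in CAPTURE_PLAN[stage] if field not in record]

# Example: a 'build' record missing its protocol version gets flagged
# before the sample moves on to testing.
missing = check_capture("build", {"construct_id": "AB-017",
                                  "materials_lot_numbers": ["L123", "L456"]})
print(missing)  # ['protocol_version']
```

Checking each record against a plan like this before a sample moves to the next stage is one simple way to catch under-captured data early.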

This also forces scientists to think about data differently.  Data is not just the final number that arises from the most important experiment.  Data also includes the specifics on how each sample was made, the materials that went into making each sample, alterations to protocols, and the physical origin of quantitative information.
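
One way to picture this broader notion of data is a record that travels with each physical sample. The sketch below is purely illustrative; the field names and sample identifiers are hypothetical, not our production schema.

```python
# Hypothetical sketch of a sample record that keeps the "data" described
# above (how the sample was made, what went into it, protocol changes, and
# where its numbers came from) attached to the physical sample's label.
from dataclasses import dataclass, field

@dataclass
class SampleRecord:
    sample_id: str   # matches the label on the physical tube or plate
    protocol: str    # protocol name/version used to make the sample
    materials: list[str] = field(default_factory=list)            # reagent lots consumed
    protocol_deviations: list[str] = field(default_factory=list)  # any alterations made
    data_files: list[str] = field(default_factory=list)           # raw/processed data paths

# Illustrative example (sample and lot identifiers are made up):
record = SampleRecord(
    sample_id="AB-017-03",
    protocol="antibody_expression_v2",
    materials=["plasmid lot L123", "media lot M456"],
    protocol_deviations=["induction extended to 18 h"],
    data_files=["data/AB-017-03_elisa.csv"],
)
```

Because the record carries the sample's label, its protocol, its materials, and its data files together, the digital information stays linked to its physical counterpart.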

In order to achieve this goal, we developed a modular informatics infrastructure that supports and complements our experimental workflows to simplify data retrieval, processing, and analysis. This infrastructure is a collection of software and data management tools that help standardize the flow of information through the project cycle. It is built in a modular manner, allowing users to swap out different tools and find what meets their needs best, making it incredibly flexible, customizable, and workflow independent. Some of these tools include electronic lab notebooks for easy recording of experiments that can be standardized and accessed anywhere, dashboards that provide visual representations of productivity metrics, databases for information storage and organization, and bioinformatics pipelines.
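
As a rough illustration of the "swappable tools" idea, the sketch below shows a plug-in style interface in Python. It is not our actual implementation; the class and method names are hypothetical.

```python
# Hypothetical sketch of the "swappable tools" idea: each tool (electronic
# lab notebook, database, dashboard, ...) sits behind a small shared
# interface, so one backend can be exchanged for another without touching
# the rest of the workflow. Names are illustrative only.
from abc import ABC, abstractmethod

class NotebookBackend(ABC):
    @abstractmethod
    def record_entry(self, experiment_id: str, text: str) -> None: ...

class LocalFileNotebook(NotebookBackend):
    """Simplest possible backend: append entries to a local text file."""
    def __init__(self, path: str):
        self.path = path
    def record_entry(self, experiment_id: str, text: str) -> None:
        with open(self.path, "a") as fh:
            fh.write(f"{experiment_id}\t{text}\n")

class CloudNotebook(NotebookBackend):
    """Placeholder for a hosted ELN; swapping it in requires no workflow changes."""
    def record_entry(self, experiment_id: str, text: str) -> None:
        raise NotImplementedError("connect to the hosted service here")

def run_experiment(notebook: NotebookBackend) -> None:
    # The workflow only knows about the interface, not the concrete tool.
    notebook.record_entry("EXP-042", "Set up ELISA plate per protocol v2.")

run_experiment(LocalFileNotebook("lab_notebook.txt"))
```

Because the workflow depends only on the interface, swapping the local file notebook for a hosted electronic lab notebook requires no changes to the experimental code.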

Using this tool, we save time, money, and resources by limiting troubleshooting, reducing human error through standardization, and cutting the time spent searching through unorganized data.

We enhance repeatability by efficiently capturing all necessary data concerning experimental workflows, sample creation, and data analysis.

And we increase productivity by freeing up our team’s time to work on higher level parts of the project instead of working through the details.

This leads to faster scientific discovery and innovation. Like this life-saving antibody.

A special thank you to CSU Ventures for organizing Demo Day and to all the sponsors of 2019’s showcase.