Quality by Design for Clinical Trials

Vatché Bartekian
President, Vantage BioTrials, Inc.

Abstract: Quality by design for clinical trials comprises an independent entity responsible for quality standards and an integrated system where each person is accountable for quality. This article explores myths about quality and provides a general overview of the principles and philosophy of quality by design. Quality issues normally encountered at clinical research sites and contract research organizations as well as practical ways to build quality into a research program in order to prevent issues are highlighted. The Plan-Do-Check-Act cycle and its tie-in with risk-based monitoring are described.

Myths about Quality

There are many myths about quality in clinical trials, which will be unmasked in this article.

Quality Myth #1:
Auditors are the only ones qualified to implement quality systems and processes.

Quality Myth #2:
Maintaining a quality system is impossible with so many variables in clinical research.

Quality Myth #3:
The cost of establishing, maintaining, and re-evaluating quality is usually very high.

Quality Myth #4:
Quality is a department, a function, a specific role, or someone else’s job.

There are only two options: to make progress or to make excuses. Aristotle said, “Quality is not an act. It is a habit.” These are not only words to live by; the author considers them words to work by as well.

Defining Quality

Different definitions of quality exist. Webster’s Dictionary defines quality as, “A high level of value or excellence.” This definition, however, does not state who attributes the value to quality.

The International Organization for Standardization (ISO), in ISO 8402, defines quality as, “The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs…” This definition is very different from the one before; however, it does not state whose needs are being satisfied.

Peter Drucker, a well-known educator and author, said, “Quality in a product or service is not what the supplier puts in. It is what the customer gets out and is willing to pay for.”

The goal of clinical research is to find new treatments for patients. At the end of the day, however, clinical research is a business that many other businesses count on, such as sponsors and contract research organizations (CROs). The phrase “willing to pay for” is very important in Drucker’s definition, as there is a cost factor in achieving quality.

Quality is also about expectations. Drucker’s definition focuses on meeting or surpassing the customer’s expectations.

Quality Versus Operations – Can They Live in Harmony?

The first step in implementing quality by design in clinical trials is to make quality a guiding principle or theme within the organization, whether it is a clinical research site, a CRO, or a sponsor. Quality should never be just a department within an organization. Yet it often remains challenging to bring quality management and operations management together to improve overall quality metrics.

Quality management looks at deviation management; audits, inspections, and performance; approvals of products and procedures and quality control; and having quality advocates such as QA auditors and quality managers. Operations management looks at strategy and competitive position; staffing, personnel, and human resources; revenue; cost control; and production and logistics management. Quality management and operations management can cooperate by both being involved with the risks entailed in all activities within the organization.

Clinical trials involve risks related to the study protocol, data, scientific review, and standard operating procedures (SOPs). Study protocol risks can be assessed at study setup, prior to protocol approval, and at site initiation. With regard to clinical data, critical quality metrics can be assessed with a focus on Good Clinical Practice (GCP) principles. Quality management and operations management should consider data to be more than numbers and consider how GCP principles affect the data. In scientific review, risks are assessed on a scientific and medical level with a focus on data quality and completeness, including efficacy and safety results. For SOPs, quality management and operations management should assess the systems and processes involved in study development, management, and data review.
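As a purely illustrative aid, the following minimal Python sketch shows one way such risks might be captured in a simple risk register and prioritized by a likelihood-times-impact score. The field names, scoring scheme, and example entries are assumptions for illustration, not part of any prescribed standard.

```python
# Illustrative sketch only: a minimal risk register covering the four risk
# areas named above (protocol, data, scientific review, SOPs). Field names,
# scoring, and example entries are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Risk:
    area: str          # "protocol", "data", "scientific review", or "SOP"
    description: str
    likelihood: int    # 1 (rare) to 5 (frequent)
    impact: int        # 1 (negligible) to 5 (critical)
    mitigation: str = ""

    @property
    def score(self) -> int:
        # Simple likelihood x impact score used to rank risks
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: List[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def prioritized(self) -> List[Risk]:
        # Highest-scoring risks first, so resources go where they matter most
        return sorted(self.risks, key=lambda r: r.score, reverse=True)

register = RiskRegister()
register.add(Risk("protocol", "Ambiguous visit window in the protocol", 4, 3,
                  "Clarify at site initiation"))
register.add(Risk("data", "Key efficacy endpoint entered manually", 3, 5,
                  "Add edit checks and targeted source verification"))
for risk in register.prioritized():
    print(risk.area, risk.score, risk.description)
```

Whatever form the register takes, the point is that quality management and operations management work from the same prioritized view of risk.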

Quality by Design Principles and Philosophies

Whether or not quality should be an independent entity within an organization is a major issue. On the one hand, it can be argued that quality management should be an independent entity in order to prevent potential bias and provide objectivity. On the other hand, however, it can be equally valid to argue that quality management should be an integral part of all operations in order to be a constant reminder to set high standards.

Quality by design provides a solution to this conundrum. It states:

“An organization should have an independent entity to measure/review quality standards, along with an integrated system to continuously verify, analyze, correct and prevent issues from arising.”

Both correcting and preventing issues are part of quality by design.

GCPs and clinical trials have existed for decades, yet major variability in study conduct still exists. It is necessary to build quality into clinical trials, but equally important to learn how to measure quality through metrics in order to continuously improve on past performance.

Components of a quality philosophy include proactive rather than reactive analysis and identification of risk factors (Table 1). A quality philosophy also requires applying operational excellence, Lean 6-Sigma techniques, and using innovative quality risk management systems and tools to detect, prioritize, manage, and resolve risks.

Many vendors provide software for quality risk management systems, ranging from large vendors whose products suit big pharmaceutical companies and CROs to others that offer systems for smaller companies. There is no one perfect system, because the best system depends on the needs and culture of the organization using it. It is important to vet a quality risk management system before choosing it.

Identifying best practices that will work for the organization to spearhead internal process optimization is also part of a quality philosophy, along with assessing and diagnosing risk and compliance issues early on. Attending conferences and learning from peers is one way to identify best practices. Analyzing the big picture of the protocol and study operations early enables organizations to be proactive instead of reactive, and it ensures timely implementation of Corrective and Preventive Actions (CAPA).

Organizations must also coach, motivate, and develop winning staff through effective team building and communication. This is often overlooked. Knowledge is not valuable unless it is shared. When a subject matter expert shares knowledge and serves as a cheerleader, the idea or process can move forward. People must become mentors or trainers and spearhead programs internally in order to bring other staff members onboard and convince them that a new idea or process is good and should be implemented.

The size and experience of clinical research sites and organizations are often considered factors in quality variability. Size, however, does not really seem to matter: no correlation exists between the number of quality issues and the size of a site or an organization. Evidence suggests that experienced sites and organizations have better quality, yet they still often receive major audit findings or FDA 483 forms.

Most Common Causes of Bad Quality

Key reasons for poor quality in clinical trials include inadequate staff training on GCPs and the protocol, mostly due to a sudden increase or decrease in resources (Table 2). There is a great deal of change within organizations. For example, when an experienced study coordinator returns from maternity leave in the middle of a study, appropriate training must still be provided.

Poor (or lack of) management supervision or quality control of task completion during the study is another key reason for poor quality. Implementing an oversight plan is part of quality by design. When managing sub-contractors, oversight plans are a must.

Lack of protocol clarity also leads to a poor understanding of what is required from all parties involved in a study. This sounds obvious; however, investigators and study coordinators often ask dozens of questions at investigator meetings and site initiation visits. Sponsors should seek advice on the protocol from investigators and perhaps even from patients. They can work with patient advocacy groups to obtain feedback from the patient’s perspective. Many patient advocacy groups welcome this, and they seek out pharmaceutical companies to work together to design more robust and realistic protocols. This also ties in with the notion of being a more “patient-centric” organization, which is always welcomed in our industry.

Another key reason for poor quality is a lack of quality control over collection and recording of study data. The author has been to certain clinical research sites where the research team is very experienced; however, team members do not do double verification of the data entered. They feel that the monitor will verify the data. This creates a great deal of work for team members, the monitor, and the sponsor. At other sites, one person spends most of his/her time on quality control verification. It is important to double-check the data that are being entered, that the proper documents are in the trial master file, and so forth. Having one person dedicated to doing this saves everyone many headaches later.

Implementing Quality by Design

Ways to build quality into clinical research programs include implementing standard processes/procedures (SOPs) and effective training. Implementing SOPs is obvious; however, often SOPs are not followed. Effective GCP training and refresher training is another way to build quality into clinical research. Many clinical research professionals think that reading something and documenting that they read it in a log means that they have been trained. This, however, does not mean that the training was effective. It is necessary to ensure that clinical research professionals understand the content of the training, through, for example, quizzes or practical exercises.

Building quality into clinical research programs also requires continuous assessment of risk factors, clearly defined roles and responsibilities, and effective management oversight of and accountability for the study. In assessing risk factors, the words “continuous assessment” are key. Risks must not only be assessed at the beginning of the clinical trial; they should be analyzed throughout the entire project’s life cycle.

Adequate study-specific training is crucial in building quality into clinical research programs. Clinical research sites, sponsors, and CROs should perform mock research participant visits, walk through study procedures, and so forth. Going through the study procedures step by step enables sponsors and CROs to determine whether they are asking too much of the research participants and site staff, and whether sites and research participants are being fairly reimbursed.

The main objectives of quality by design are patient safety and data integrity. These objectives are achieved through compliance with SOPs and regulations, robust processes, consistency across all processes, and transparency. Transparency is subtler. Errors do occur (to err is human after all) and it is important to be transparent about what happened and how it will be fixed.

Quality by design requires continuous improvement and often uses the Plan-Do-Check-Act (PDCA) cycle (Table 3). After setting standards and analyzing what could go wrong, the team goes through the PDCA cycle and repeats it. The interval for repetition is determined based on the design and needs of each clinical trial. Using the PDCA cycle guarantees an improvement in quality for clinical research sites, sponsors, and CROs.

Step 1 of the PDCA cycle, “plan,” is establishing the objectives and processes necessary to deliver quality results in accordance with the expected goals. Establishing goals creates a clear path to the desired improvement. When possible, planning should start on a small scale in order to test possible effects. Thus, it is not necessary to have a large system in place initially. Start by identifying and focusing on one risk factor.

“Do” is Step 2 of the PDCA cycle. This involves implementing the plan, executing the process, and, most importantly, collecting the data. People often forget to collect the data, and without it, it is not possible to move forward. Many tools are available for collecting and analyzing data.

Once the data have been collected, the third step of the PDCA cycle is to “check” them. Checking involves studying the actual results and comparing them against the goals to ascertain any differences. Look for deviations in implementation from the plan and ensure that the plan was appropriate and complete enough to enable the execution. Charting data can make it much easier to see trends over several PDCA cycles and to convert the collected data into valuable information. This information is necessary for the next step.

“Act” is Step 4 of the PDCA cycle. Once the information is available, request corrective actions on significant differences between the actual and planned results. Analyze the differences to determine their root causes. Determine where to apply changes that will improve the process.

The PDCA cycle is then repeated at the specified intervals. While the “A” in PDCA formally stands for “act,” the author prefers referring to the “A” as “adjust.” This conveys an understanding that Step 4 really focuses on course correction, or adjusting the path forward.

A simple example illustrates the use of the PDCA cycle. The issue that has been identified is that participants in a Phase 1 study are not correctly signing/dating their informed consent forms. There are 50 healthy volunteers in the study. The “plan” asks questions about:

• Who is authorized to consent subjects
• Whether there is a pattern in the incorrect informed consent forms
• How the staff is trained
• Whether other factors are involved.

Once the plan and goal are established, “do” might involve developing a re-training program and performing re-training of staff. “Check” could be directly observing staff performing the consenting process and conducting an internal audit of the next 20 informed consent forms before the participants leave the clinic.

If the information is still missing, “act” involves having the participants complete the informed consent forms properly, retraining the staff, or authorizing another staff member to consent incoming participants. Any other factors that may be involved, such as overburdened or distracted staff, must also be corrected. The cycle is then repeated.
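To make the “check” step of this example more concrete, the following minimal Python sketch audits a batch of consent records for missing signatures or dates and compares the result against the goal of zero deviations. The record fields, the date rule, and the participant IDs are hypothetical assumptions made for illustration; they are not drawn from the article.

```python
# Hypothetical sketch of the "check" step: audit the next batch of informed
# consent forms for missing or incorrect signatures/dates before the
# participants leave the clinic. All fields and sample data are illustrative.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class ConsentRecord:
    participant_id: str
    signature_present: bool
    date_signed: Optional[date]   # None if the date field was left blank
    visit_date: date

def find_deviations(records: List[ConsentRecord]) -> List[str]:
    """Return human-readable findings for the audited consent forms."""
    findings = []
    for rec in records:
        if not rec.signature_present:
            findings.append(f"{rec.participant_id}: signature missing")
        if rec.date_signed is None:
            findings.append(f"{rec.participant_id}: date missing")
        elif rec.date_signed > rec.visit_date:
            findings.append(f"{rec.participant_id}: consent dated after the visit")
    return findings

# "Check": compare the audited results against the goal of zero deviations.
audited = [
    ConsentRecord("HV-001", True, date(2023, 3, 1), date(2023, 3, 1)),
    ConsentRecord("HV-002", True, None, date(2023, 3, 1)),
]
issues = find_deviations(audited)
affected = {finding.split(":")[0] for finding in issues}
compliance = 1 - len(affected) / len(audited)
print(f"Compliance: {compliance:.0%}; findings: {issues}")
```

The same pattern generalizes: each PDCA “check” reduces to collecting the relevant records, applying the agreed criteria, and quantifying the gap between actual and planned results.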

Benefits of Quality by Design

Early detection, seeing the big picture, and gaining deeper insights into processes are benefits of quality by design. Through early detection of problems, faster corrections can be made, costs can be avoided, and images can be protected. All organizations must protect their images. Seeing the big picture enables organizations to comprehensively detect issues, quantify and prioritize risks, and focus the allocation of resources.

Gaining deeper insights into processes enables detection of systematic issues, identification of trends and patterns, and process improvements. Quality by design emphasizes trends and patterns. Identifying trends and patterns helps organizations determine where to focus in order to fix problems.

A PricewaterhouseCoopers (PwC) study published in 2013 showed the potential for risk-based monitoring to save 15% to 20% in study portfolio costs compared to traditional monitoring. These were real cost savings for a real study. The cost of pre-trial submission and regulatory work, report production, data analysis, and safety did not change. Study activities such as investigator setup, data processing and management, site management, project management, and site monitoring and auditing changed substantially. The study showed a 25% cost savings for site monitoring and auditing and a 20% cost savings for project management and data processing and management. Other cost savings ranged between 5% and 10%. Across a study portfolio, the cost savings would reach millions of dollars.

Planning and start-up activities actually cost 2% more. The PwC study showed that implementing quality by design in the start-up phase of a clinical trial requires more time and resources to determine the risks and develop a risk management plan; however, this early investment avoids potentially bigger costs toward the middle and end of the study.
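As a rough illustration of how these percentages translate into dollars, the Python sketch below applies the quoted changes to a set of hypothetical baseline line items. Only the percentage changes come from the cited study; the dollar amounts and the 5%-10% midpoint are assumptions for illustration.

```python
# Worked illustration of how the percentage changes cited above compound
# across a study budget. The baseline dollar figures are hypothetical; only
# the percentages come from the PwC figures quoted in the text.
baseline_costs = {              # hypothetical per-study costs (USD)
    "site monitoring and auditing": 2_000_000,
    "project management": 1_000_000,
    "data processing and management": 800_000,
    "other study activities": 1_200_000,
    "planning and start-up": 500_000,
}
pct_change = {                  # negative = saving, positive = added cost
    "site monitoring and auditing": -0.25,
    "project management": -0.20,
    "data processing and management": -0.20,
    "other study activities": -0.075,   # assumed midpoint of the 5%-10% range
    "planning and start-up": +0.02,
}
rbm_costs = {k: v * (1 + pct_change[k]) for k, v in baseline_costs.items()}
saving = sum(baseline_costs.values()) - sum(rbm_costs.values())
print(f"Per-study saving: ${saving:,.0f} "
      f"({saving / sum(baseline_costs.values()):.1%} of baseline)")
```

Under these assumed baseline figures the per-study saving comes to roughly $940,000, about 17% of the budget, which sits inside the 15% to 20% portfolio-level range; a portfolio of a few dozen such studies reaches the millions of dollars described above.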

Conclusion

Table 4 summarizes building quality by design into clinical trials. Each person is accountable for the quality of the study that he/she is working on. Quality is not the responsibility of a specific department or the quality assurance auditor. It is important to learn the customer’s expectations early on. It is also important to understand that the customers in a clinical trial are CROs, sponsors, clinical research sites, and research participants, not just sponsors themselves.

Creating a systematic process to build in quality makes the task more achievable. Spending the time at the beginning of the clinical trial process to do this will make everything else in the clinical trial much easier. Ignoring quality costs much more than addressing it continuously. Learn from others and identify best practices.

Increase quality by conducting risk analysis early on and throughout the project. Communicate clearly and often with the project team and the sponsor. Often communication plans are drafted and approved; however, they are not implemented. The communication plan outlines what to do when something happens and how to escalate a problem. It also lists the people who are accountable for specific tasks. It should be followed.

The myths described at the beginning of this article have now been exposed. Quality is not a department, position, or title. It is not a cost center or a final inspection. Quality should not be misused as a buzzword. Company websites often use terms such as “best quality”; if they use this term, they should prove that they provide the best quality. Quality is not something in someone else’s job description. It is a state of being. Quality has no finish line. It is a continuous process that must be reviewed on an ongoing basis in order to make improvements.

TABLE 1
Components of a Quality Philosophy

  • Perform proactive (rather than reactive) analysis and identification of risk factors.
  • Apply operational excellence and Lean 6-Sigma techniques.
  • Detect, prioritize, manage, and resolve risks by using innovative quality risk management systems and analytical tools.
  • Identify best practices to spearhead internal process optimization.
  • Assess and diagnose risk and compliance issues early on to ensure timely implementation of Corrective and Preventive Actions (CAPA).
  • Coach, motivate, and develop winning staff through effective team building and communication.

TABLE 2
Key Reasons for Poor Quality

  • Inadequate staff training on GCPs and the protocol, mostly due to sudden increase or decrease in resources
  • Poor (or lack of) management supervision or quality control of task completion during the study
  • Lack of protocol clarity leading to poor understanding of what is required
  • Lack of quality control over collection and recording of study data   

TABLE 3
The Plan-Do-Check-Act Cycle

  • Plan:
    • Establish the objectives and processes necessary to deliver quality results in accordance with the expected goals.
  • Do:
    • Implement the plan.
    • Execute the process.
    • Collect data.
  • Check:
    • Study the results and compare them against goals.
    • Look for deviations in implementation from the plan, and check that the plan was appropriate and complete enough to enable execution.
    • Gather information for the next step.
  • Act:
    • Request corrective actions on significant differences between actual and planned results.
    • Analyze the differences to determine their root causes.
    • Determine where to apply changes that will improve the process.
    • Repeat the cycle.

TABLE 4
Quality by Design in Clinical Trials

  • Each person is accountable for the quality of a study.
  • Learn the customer’s expectations early.
  • Creating a systematic process to build in quality makes the task more achievable.
  • Ignoring quality costs much more than addressing it continuously.
  • Learn from others, especially best practices.
  • Conducting risk analysis early on and throughout the project increases quality.
  • Communicate clearly and often.


