“Every 483 observation has a training component.” Truer words have never been spoken, and I only wish it had been me who said them. An FDA inspector I heard at a conference almost 15 years ago made this statement, and it forever changed how I look at 483 observations. I’ll admit up front that I have a bias toward training; after all, it’s what I do. Others don’t necessarily share that bias: training often isn’t done well, isn’t timely or effective, and isn’t always meaningful. I’ve heard the arguments and can even agree with some of them. But let’s set that aside for now and look at what the inspector meant.
Sometimes It’s Obvious
Sometimes, the link to training in the 483 observation is obvious because some element of the training system or process appears in the citation itself, such as:
“The training program lacked a system to identify training needs for each employee, and therefore, it was not possible to determine if every employee was trained on their specific duties prior to performing them.” (February 2012)
“Your firm failed to establish a program to qualify the operators’ ability to gown in an aseptic manner. At the time of the inspection, our investigators observed aseptic filling operations performed by operators who had failed to undergo gowning qualification.” (April 2014)
“You did not train your contract employees in CGMP or in job-specific procedures. Your contract employees conducted critical CGMP operations for your finished drug products such as visual inspection of filled capsules, 100% verification of sealed bottles, final label quality inspection, outsert pasting, and final packing in boxes.” (May 2014)
But what if “training” doesn’t appear in the observation at all? How do we find that component in every citation?
Sometimes It’s Not So Obvious…
First, let’s broaden our definition to include what the regulators will consider “training.” At a minimum, all of the elements below will be considered part of a firm’s training program:
- Identification of training needs: The job tasks to be performed by specific positions have been identified and assessed to identify what training is required to enable people to perform them correctly.
- Provision of training: Training is provided (both on-the-job procedural training and ongoing GMP training), documented, and includes the use of qualified trainers and appropriate training content.
- Effectiveness of the training: Assessment processes are in place that identify whether the trainees were able to meet the training objectives.
- Management ownership: Area management must define the training required for their area(s), ensuring completion of training requirements prior to assigning qualified individuals to perform tasks.
So let’s look at a 483 observation where training isn’t clearly identified:
“Input to and output from the computer and related systems of formulas are not checked for accuracy. Specifically, the software program used in the Quality Control Laboratory has not been validated to generate results from the electronic data generated during testing. Additionally, the analytical formula used to calculate the result of moisture in raw material and finished product has not been verified or validated.” (July 2011)
How do we get “training” out of that?
Let’s analyze it from the top: The computer system and related formulas were not checked for accuracy, and the software program wasn’t validated. Why not? Maybe the program wasn’t validated at all, or the validation that was done simply didn’t include the check for accuracy. Again, why not? It would appear that those implementing the system didn’t know the requirements for validation and use of electronic systems.
And simple as that, we’ve identified a training component: Lack of knowledge of requirements and expectations for validation and use of electronic systems.
When viewed from this standpoint, the observation yields questions about the qualification of the people who put the program into use and those who reviewed its implementation. If they didn’t know the system needed validation or appropriate testing wasn’t in the validation—and none of the reviewers flagged it—their qualifications and their ability to perform the task can be questioned:
- Were they not trained on the requirements for electronic and computer systems, from 21 CFR 211 Subpart D and 21 CFR 11, prior to performing the task?
- Were they not qualified to write test protocols, because of not knowing what to include?
- How did reviewers let something through that didn’t meet requirements and expectations? Were they qualified to perform the review?
The same set of questions would apply to the lack of validation of the analytical formula/method and the requirements associated with that element of the observation as well.
Let’s try another example:
“Procedures to prevent contamination of equipment or product by substances that may have an adverse effect on product quality have not been adequately established. Specifically, the cleaning efficiency test is inadequate. The study found mold activity on coupons simulating equipment and surfaces in the manufacturing area following cleaning. The report states that the cause of the mold was ‘elevated levels of mold found throughout the building.’ However, mold was not used as a challenge organism during the study. Therefore, there is no evidence to support whether the current cleaning agents used in the facility are effective against the different species of mold found in the facility. In addition, no acceptance criteria were determined prior to conducting the study. Since the study was conducted, mold has been identified on multiple occasions in Class 10,000 and 100,000 areas.” (April 2013)
This observation refers to the appropriateness of a study’s design, which leads to questions about the involved individuals’ experience with designing experiments, their understanding of the purpose of the work, and their grasp of the implications of items found in routine monitoring.
And one more example:
“Routine calibration, inspection, and checking of automatic, mechanical and electronic equipment is not performed according to a written program designed to assure proper performance. Specifically, Packaging Line () was moved to its current location in 4/2011, however, it was reassembled using different components than those previously qualified as part of the packaging line. Your firm failed to re-qualify this packaging line used to package all of your drug product.” (February 2012)
This example highlights a lack of a fundamental, big-picture understanding of validation and change control. All individuals working in a GMP environment should be exposed to these concepts and expected to understand which types of systems are validated and the impact of change on a validated system. The individuals involved in this situation apparently did not know that changing out parts could alter the operation of the system, or that dismantling and reassembling the system, especially with different parts, would effectively invalidate the installation, operational, and performance qualifications that had previously been performed, requiring re-qualification of the system.
What Can We Learn From This?
When reading 483 observations from this perspective, gaps in knowledge quickly become obvious. People often miss requirements because they were never exposed to them or never understood them. Other times, people lack important foundational knowledge because no one championed the importance of everyone understanding the big picture. And still other failures occur because people without the appropriate qualifications are assigned to perform tasks.
Each of these comes down to individuals not knowing that they’re missing something or that there is something they should know prior to performing the task.
The following set of questions can apply to many non-training specific observations in almost any 483:
- Did those responsible for the task have training on the regulatory requirements and appropriate guidance documents that captured expectations for the task they were performing?
- Were they trained on other critical elements and skills (e.g., design of experiments, etc.) needed or did they have the appropriate background to perform the task appropriately?
- Were these assigned as training requirements for the people involved?
- Did they successfully complete the training requirements prior to being assigned to perform the task in question? How do we know, and what evidence do we have?
- Were the activities captured in the position requirements for the individuals who performed them? If not, their training curricula may not have included the content needed to prepare them for the tasks.
- Did they understand the big picture purpose of the activity, and were they able to use that knowledge to make appropriate decisions in executing the task?
“Doing more with less” has left the average pharmaceutical worker with less time for training, and has led firms to cut training requirements (and training itself) back to a bare minimum. The result is a bevy of workers without appropriate levels of knowledge, as reflected in 483 observations.
It falls to certain individuals in our organizations, including leadership, area management, subject matter experts, and trainers, to ensure that workers are appropriately prepared for the tasks they’ll be asked to perform. Do this by defining appropriate training requirements, required levels of knowledge, and skills for the tasks, and then holding people to meeting those expectations. No exceptions!
ABOUT THE AUTHOR
Joanna Gallant is an experienced, solutions-driven Quality professional with over 20 years of technical and operational experience within pharmaceutical, biotechnology and medical device manufacturing environments. Over her career she has provided regulatory, technical, skill and management development training support to all Operations functions as well as IT, R&D, Customer Service and senior management.
Joanna has held positions in quality assurance, laboratory, and training roles. She has established and led multifunctional and global project teams, and has worked both as an individual contributor and as a manager, so she can speak from various perspectives. She is well versed in instructional systems design and possesses strong analytical problem-solving and root cause analysis skills, which she uses to identify problems and solutions for performance and process improvement, and to analyze training systems for gaps and training programs for effectiveness.