CRF Health recently held another very successful eCOA Forum, this time hosted in Amsterdam. These semi-annual meetings are a chance to get together with our clients and leading industry experts to discuss the latest developments in the field of eCOA (electronic Clinical Outcome Assessments). It is also very important for us to hear directly from clients about the real-world challenges they are facing and creative solutions they have developed to support the implementation of eCOA in clinical trials in their companies.
There were a number of presentations, workshops and hands-on sessions during the two days of the Forum. However, some of the most interesting feedback was generated during four breakout sessions which sought to develop best practices in a number of important aspects of eCOA solutions, namely Study Design and Set Up; Scientific Best Practices; Training Best Practices; and Scaling eCOA for Company-Wide Use.
The Study Design and Set Up group highlighted three key tasks:
- Ensuring careful management of validated questionnaires being used in the study
- Developing clear protocol definitions and study design
- Initiating early vendor selection and project setup
This nicely complemented the group discussing Scientific Best Practices, where the take-home messages were all about early consideration of scientific issues relating to questionnaire choice and vendor selection.
The Training Best Practices session broke down the issues into training of the study team, site personnel and, of course, the subjects. The importance of having devices with final (or as near final as possible) software implementations was highlighted, and the potential of building in scope for follow-up training was also emphasized. With all training, standardizing the approach and relying less on the personal skills of site staff, CRAs and others (for example by using standard materials and training videos) was suggested as a key way of improving the quality of training delivered during studies.
Scaling eCOA for Company-Wide Use was a group that generated much interest among our clients who are just starting to explore the possibilities of eCOA and are running into resistance among colleagues who are often more comfortable using paper. The group's suggested top five things to focus on were:
- Securing senior management’s support
- Overcoming myths and perceptions relating to eCOA
- Ensuring good risk management
- Education and communication throughout the company
- Supporting study teams
There was also a lot of stimulating discussion of the future of eCOA, with a near unanimous desire among our clients to ultimately move away from providing hardware to patients to a "bring your own device" model where patients can complete study questionnaires on their personal device. As we at CRF Health are exploring new eCOA technologies, it was great to hear the enthusiastic debate on the topic. This approach obviously raises a whole host of potential issues, some of which CRF Health hopes to discuss in more detail in future publications.
All in all, it was a very successful and stimulating few days. It is important to occasionally leave the day-to-day bubble of the office and actually engage with clients in a more general setting, to understand the real issues they are having and the often novel solutions they have developed to overcome them. We hope to see some of you at our future events.
VP of Product & Service Management
July was a month of very mixed fortunes for Alzheimer’s disease (AD) compound development. On the positive side, EnVivo Pharmaceuticals announced positive effects of their nicotinic alpha 7 compound EVP-6124 at the Vancouver AAIC event (see the slide deck presentation and Dr. Dana Hilt’s press presentation). Nutricia were also at AAIC to present the results of their investigation of Souvenaid, a medical food that has shown positive effects on memory in patients with early AD (see Dr. Philip Scheltens’ AAIC presentation).
Much less encouraging was the news from Pfizer and Janssen AI that Bapineuzumab has not shown positive effects in treating AD. These disappointing results for Bapineuzumab have reignited discussions about the time course of the disease and raised the question of whether intervening in already diagnosed AD is simply too late. Consequently, the focus of many of the discussions I had in Vancouver was on intervening at the prodromal stage (secondary prevention), i.e. before patients are diagnosed with AD. Dr. Reisa Sperling described the planned ‘A4’ study and made the point that increases in amyloid are often seen as early as 15 years before the onset of AD (see her presentation).
Although the AAIC meeting is an AD event, in the restaurants and cafes around the convention centre there was much discussion of other neurodegenerative diseases, and particularly Parkinson’s disease (PD). The hallmarks of this disease are the cardinal signs that present as difficulties with movement, including tremor and bradykinesia (slowness of movement). Remedies for PD have tended to focus on the movement component of the disease and it is interesting to note that for many patients one of the most troubling aspects of PD is the cognitive problems they suffer. Dr. Alexander Tröster makes the point that "mild changes may be present as early as the time of diagnosis" and "at any one time about one quarter to one third have mild cognitive impairment (MCI) while another one quarter to one third have dementia" (see here). See also his very informative webcast on this topic.
Cognitive difficulties in PD were the focus of my PhD and I have maintained a strong interest in this area. I have also witnessed a marked recent increase in efforts to remedy these cognitive difficulties with pharmaceuticals. This is hardly a new idea; some years ago Rivastigmine was investigated for rescuing cognitive deficits in PD and showed some improvements on general tests of cognition, reaction time and executive function (see here).
Recent experience of participating in clinical drug trials of new therapies for remedying the signs and symptoms of PD has reminded me just how ‘busy’ these protocols have become. The need to monitor behavioral, psychiatric, cognitive and the myriad other signs and symptoms of PD requires the inclusion of often as many as 20 different scales and measures, placing a significant burden on study teams. Many of the cognitive areas of interest, such as memory and executive function, are often measured using traditional ‘paper-and-pencil’ tests, whilst cognitive skills such as attention and psychomotor speed are best indexed using computerised tests such as those employed in the above referenced Rivastigmine trial. Given the number of other assessments in a typical protocol it is tempting to employ computerised versions of the standard scales, such as the UPDRS, Zarit Caregiver Burden, MMSE, etc. Building on the other documented benefits of electronic data capture, this approach can greatly reduce the burden on study staff.
I’m sure that caregivers, patients and clinicians would welcome the development of drugs to remedy the movement difficulties associated with PD. A legacy of current medication is often the emergence of dyskinesia, uncontrolled athetoid and choreic movements. Drugs that remedy tremor and bradykinesia without this unfortunate legacy would doubtless be welcomed. It is tempting to suppose that patients free of the burden of tremor, dyskinesia and difficulties with initiating movement would be better able to focus on remembering things, concentrating and problem solving, so whilst new compounds might not have ‘direct’ effects on cognition, the control of movement difficulties might well be reflected as benefits to cognition.
It would be nice to see the development of drugs to rescue the cognitive consequences of PD. License extensions of drugs such as Rivastigmine are of course important, but new therapies targeted on the often distinctive cognitive deficits seen in patients with PD would be very welcome. Current EMA guidance notes for PD drug developers refer to the current guidance notes for AD and other dementias for advice regarding the assessment of cognition. However, specific assessments based on tests known to be sensitive to cognitive deficits in PD, with links to the pathological neural substrates of the disease, seem a more satisfying approach. One possible solution is to employ a similar assessment to that used in a recent trial of a gene therapy for PD. Interestingly, the concern in this trial was primarily cognitive safety, but the same measures would likely function just as well as measures of efficacy.
Rescuing cognitive impairments in a variety of neurological and psychiatric disorders is an area of clear unmet need. Current treatments for PD focus on the movement disorder component of the disease and the development of compounds designed to rescue the cognitive components of the disorder would surely be welcomed by patients, caregivers and healthcare professionals alike. Hopefully we can bring the same level of sophistication to cognition measurement and trial conduct as we do to the search for new treatments.
John Harrison, PhD
Scientific Consultant, CRF Health
There are many things to consider when developing a questionnaire – the concepts of interest, wording of questions, recall period, response options, a Likert scale versus a visual analog scale versus a numeric rating scale, etc. However, an element that is often overlooked is the actual layout of the questionnaire, i.e., how the questions and responses that the respondent will see and interact with are actually going to look on the screen or page. Certainly, great importance is given (or should be given) to ensuring that the questionnaire is easy on the eye and intuitive to follow; but how do these decisions impact the perception of the user?
A paper from Arizona State University published earlier this year (abstract and summary) highlights the power of layout on how users respond to questions. Researchers presented participants with two lists of symptoms; one list was made up of real symptoms associated with a type of cancer, while the second list had symptoms for a fictional type of thyroid cancer. All participants were presented with the same list of symptoms; however, they differed in how they were laid out. Some participants were presented with lists in which “common” symptoms (i.e. those we all suffer from, such as fatigue, difficulty concentrating etc.) were clumped together, while other participants were presented lists in which these common symptoms were interspersed with more unique symptoms such as “lump in neck”, etc.
When participants were presented with the “clumped” lists they were more likely to diagnose themselves with the disease being described. The researchers surmised that this is due to the fact that participants got a “run” of positive hits, making it appear more meaningful compared to a mix of positive and negative hits. “[I]dentifying symptoms in “streaks” – sequences of consecutive items on a list that are either general or specific – prompted people to perceive higher disease risk than symptoms that were not identified in an uninterrupted series.”
Researchers also found that the length of the list of symptoms presented had an impact, with participants less likely to self-diagnose with a disease when positive hits were part of a longer list. It was concluded that the effect was “diluted” when positive hits were mixed in with a longer list of negatives.
That the layout of a questionnaire can have a statistical impact on the data captured is not a newly recognized phenomenon. Two papers, Christian & Dillman (2004) and Tourangeau et al. (2004), neatly summarise some of the effects that can be produced by simply altering how questions look. Some of the highlights of their findings include the following:
- Linear (horizontal) response choices affect respondent behavior. Respondents are more likely to choose answers from the top row in a multiple column format. Nonlinear layouts such as double and triple banking, seen in examples A below, are often used to save space. However, patients are more likely to choose from the top line of the banked questions when compared to the linear options seen in examples B below.
- Nominal scale questions should have evenly-spaced response options as visual midpoint plays a role in how participants answer. For example, participants are more likely to choose “Possible” and “Unlikely” in Example A compared to Example B below.
- “Non-substantive” options can also have an impact on the visual midpoint of questions. When a divider line or space is used to separate so-called non-substantive options from substantive responses the visual midpoint of the scale falls at the conceptual midpoint, as in Example A (“About the right amount”). However, when participants are presented non-substantive options simply as additional radio buttons (Example B) they are more likely to choose “Too little” as the visual midpoint of the scale has been skewed.
- Consistency of responses to the same questions differs when multiple items are shown on a single screen, divided across several screens, or presented one at a time. Participants utilize a “near means related” heuristic, and items on the same screen are more closely correlated than those spread across multiple screens.
- “Left and top mean first”. The leftmost or top item in a list is considered to be the “first” in some conceptual sense, and the remaining items follow from left to right or from top to bottom in some logical progression. Participants take longer to respond to questions that depart from this heuristic. Participants may also use it as a “cognitive shortcut” to answer questions.
We might feel we are being completely rational and providing answers based solely on our subjective truths when responding to questionnaires. However, we are all vulnerable to the ironically invisible effect of visual layout. How things look can affect us in ways we don’t even realize. In the world of outcomes research, the vague, subjective nature of the slippery concepts we are trying to capture requires us to reduce, to the greatest extent possible, the impact of external influences, and the layout of the questionnaire is one aspect that should not be forgotten.
This potential impact should be considered during the instrument development phase, and any effects should be investigated when measuring the instrument's psychometric properties using appropriate quantitative research methodologies. However, this is also a key consideration when migrating instruments from one modality to another (such as from paper to electronic), and potential changes to the layout of the questionnaire on the new modality should be carefully considered. CRF Health has migrated more than 130 eCOA instruments from paper to electronic and conducts ongoing research on these issues, making us ideally placed to support and advise our customers in these matters.
Please feel free to share your comments or contact me via email: email@example.com
Manager Health Outcomes, CRF Health
I've started collecting information regarding the key barriers for eCOA (electronic Clinical Outcome Assessments) adoption from the perspective of new (or potential) users. In many cases we on the eCOA service provider side might not even be fully aware of all of them, perhaps because we have worked out solutions to most of the key concerns. But if we're not aware of the user's concerns, we might never get the opportunity to address them.
Quite often we end up working with expert users from the client side who see the value in using eCOA within their company and they're asking for our help to provide justification to the rest of their organization as well. I can imagine there are a wide variety of reasons and it's impossible to address all of them in a blog post. But I think this may be a good format to address some of the key recurring ones.
Barrier 1 - Perceived High Cost of eCOA
Naturally, companies need to understand whether it is worth investing in eCOA technology for their particular need. In order to do this they first need to understand the additional cost of introducing this technology. This step is fairly easy, as eCOA costs are highly transparent and up-front; at least with CRF Health, fixed scope = fixed cost. There are pros and cons to this transparency in pricing; the eCOA price sticker is highly visible, but the paper one is not. It can be very difficult to estimate costs for paper studies, because most of the costs come from processing the paper into electronic format and cleaning it, and these costs are often 'hidden' in the hours spent by study sites and data management. However, this is still something that can be measured and estimated to some degree of accuracy when effort is put into it.
The other aspect of ROI is the expected benefits of the technology. This can be very difficult to quantify in terms of money. eCOA technology can have a huge impact on the quality of the data, regulatory acceptance and the management of the study. There are examples from studies where patients reported more 'events' using eCOA than paper. In some cases, there have been 2-3 times more events reported with eCOA than with equivalent studies using paper. Many study designs ultimately require a certain number of 'events' to occur or data points to be collected in order to be able to demonstrate efficacy or safety. If the use of eCOA collects more of these events faster, does that mean that the study may get by with fewer patients? Perhaps so, if these factors are well understood and the use of eCOA is considered early on, so that it can be taken into account in the protocol design. Another factor supported by study evidence is that the use of eCOA can help retain patients. This can be seen in decreased screening failure rates, lower rates of withdrawal of consent and high levels of compliance for data entry. Recruiting patients is expensive and time-consuming, and losing them in the middle of the study is even more expensive.
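The question of whether a higher event-capture rate could mean fewer patients can be made concrete with a back-of-the-envelope calculation. The figures below are purely hypothetical, chosen only to illustrate the arithmetic; real sample-size planning involves statistical power calculations well beyond this sketch.

```python
import math

def patients_needed(target_events, events_per_patient):
    """Patients required to accumulate a target number of recorded
    events, ignoring dropout -- a rough planning figure only."""
    return math.ceil(target_events / events_per_patient)

# Hypothetical figures: suppose a study design needs 600 recorded
# events, paper diaries capture 2 events per patient over the study,
# and eCOA captures 5 (toward the upper end of a 2-3x uplift).
paper_cohort = patients_needed(600, 2)  # 300 patients
ecoa_cohort = patients_needed(600, 5)   # 120 patients
```

Even under these toy numbers the recruitment difference is large, which is why considering eCOA early, at the protocol-design stage, matters.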
With eCOA the site staff can monitor the patients remotely and make sure they're compliant with the study medication and data entry schedules, providing a benefit for real-time management of patients between visits. Quite a few investigational products are highly dependent on the patients being able to closely follow the medication regimen, including concomitant medications. eCOA can help a great deal by guiding the patients as well as reporting any deviations to the sites immediately. But what is the impact of all this? I would expect it's far easier to demonstrate the efficacy of a drug when it's used as instructed.
Then there are adaptive protocols, which are dependent on getting early results from the study and then using this information to drive decisions on how to proceed. Having trustworthy, clean data available in real time must be of great value, but how do you assign a dollar figure to that?
This is a topic I will continue working on, and it is probably important enough to warrant its own blog post later on...
Barrier 2 - Can the Patients Use It?
This comes up a lot and the ever-increasing amount of consumer technology is not always helping. I'm somewhat of a technology expert, but I sometimes have difficulties navigating the menus of my TV. How do you then expect elderly patients who are very ill to be able to use a smartphone device to collect data?
I would split my answer into three parts:
Part 1: Consumer technology IS complicated; it is a very competitive environment with more features and options added all the time, which makes the underlying design principles completely different from those for designing an eCOA instrument. The CRF Health eCOA design principle is to create designs that really guide the user through what they need to do every time they log in. We don't expect them to remember anything from the training or user manual; the eCOA tool walks them through it.
Part 2: eCOA instruments are created to be a better fit for their purpose in clinical trials than the average consumer product. There is plenty of evidence to prove this point. All CRF Health's eCOA tools automatically collect compliance data from our studies. Some of these metrics are interesting: the elderly group has 92.5% compliance, well above the average of 90%. Very interestingly, they also have a relatively low standard deviation of 14.0, compared to 18.9 for the adult group. In fact, it is the adult group that performs the worst, even though they are the group most used to adopting consumer technology. We have also done several usability studies and focus group sessions, and an overwhelming majority of subjects have expressed their preference for electronic data collection over paper.
Part 3: Despite my personal beliefs and the data we have available, each study population and protocol is different. However, there are ways to confirm whether eCOA is a good fit for your study. CRF Health's technology allows us to create eCOA prototypes very rapidly, and these can then be deployed to study sites for quick proof-of-concept testing before committing to using the technology on a larger scale. We have done this several times with some tricky conditions such as Parkinson's Disease and Multiple Sclerosis, and I have been quite surprised by the good results we've achieved. I can really recommend this as an objective assessment of the technology in your specific setting. It's also a great way to get some of the key opinion leaders on board and get some early input from them.
Barrier 3 - Lack of Awareness
We interact most often with people who are well aware of eCOA, and I had been under the impression that eCOA awareness was well disseminated throughout organizations and that study teams were making informed decisions about which modality to use in their studies. Apparently that is often not the case, which becomes especially clear when we are asked to attend 'eCOA awareness days' organized by sponsors. In these events, we get to present and do demos for study teams and anyone who's interested. A large portion of the audience is often only slightly aware of the technology, but not of its full capabilities or how their particular studies could benefit from it. These sessions have been very useful for us and clearly for our customers as well, because usually after these sessions we start seeing more questions and requests for proposal from those study teams. If you're one of the eCOA champions at your company, I would encourage you to organize more of these kinds of sessions. We are more than happy to support you in this by providing materials, presentations, demos or whatever you need to promote awareness within your company. I would also encourage you to share this blog with your colleagues and refer them to our webinars and archives - there is a lot of useful information available!
Other Barriers - Maybe Something More Specific to You?
Well, I promised to give you a short list of the more common ones. I bet there are a whole host of different ones, perhaps more specific to your situation. You're in luck, because this is going to be one of the topics for CRF Health's next eCOA Forum (formerly called the User Group) in Amsterdam this September. In this interactive workshop, we will have some expert users available to discuss how they've solved some of these challenges, and any of our new (or old) users can bring their issues and challenges for a discussion facilitated by some of CRF Health's experts and, more importantly, our expert users. If you'd like to participate, you can contact me directly at firstname.lastname@example.org. Hope to see you there!
Senior Director, Technical Support, CRF Health
March and April are traditionally two months that for me involve an unusual amount of business travel. This year has been no exception: in the past six weeks I have visited Istanbul, Paris, Vienna, Dallas, Washington, Philadelphia and Cape Town, and at the time of writing I am in Tokyo with just Sydney and Melbourne to visit before returning home for Easter. The work has been varied and interesting, including advisory boards, user group presentations and a good number of investigator meetings. What is different about this year is that the indications under discussion have included depression, schizophrenia, ADHD, Parkinson’s disease, Alzheimer’s disease and a variety of other diseases, including hypertension, diabetes and fibromyalgia.
Of course the theme of these discussions and training sessions has been the measurement of cognitive change. What has been remarkable is that the same issues of, i) which cognitive domains (e.g. memory, attention, executive function, etc.) are impaired, ii) dealing with practice effects, iii) issues of rater training, etc. have occurred in the context of each indication. What is also remarkable is that the same, or at least similar, candidate tests have been proposed for use across all these indications. Sometimes the very same test has been proposed: tests of executive function such as the Controlled Oral Word Association Test are near-ubiquitously employed. Sometimes the specific test to be used varies. For example, episodic verbal memory (psychologists’ fancy name for remembering a list of words) is measured using the Hopkins Verbal Learning Test in patients with schizophrenia, the ADAS-cog Word Recall test in patients with Alzheimer’s disease and the Rey Auditory Verbal Learning Test in patients with depression. However, at root these tests are all measuring the same cognitive construct using a word list learning paradigm. Word list learning is also a component of many of the multi-domain composite tests, such as the MMSE, MoCA and SCOPA-cog, that get used to measure cognition in a variety of indications. Often the measures selected are traditional, so-called ‘paper-and-pencil’ tests borrowed from clinical psychology. Increasingly these days, such measures are augmented or replaced by computerized measures from commercial systems such as the CDR System, CogState, CANTAB, Cogtest and CNS Vital Signs. Experts typically have their own views on which is the best test of each cognitive domain, though they generally agree on what the fundamental cognitive domains are. A helpful characterisation of the domains of interest was provided by the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) initiative.
The seven domains listed are typically impaired in patients suffering from Cognitive Impairment Associated with Schizophrenia (CIAS). A recent review from Millan et al. has described how many of these domains are compromised in a number of psychiatric indications. Impaired performance in these same domains is also often seen in dementia and other neurological diseases.
The MATRICS group chose to borrow tests from clinical psychology to cover the specific domains of function. This approach, whilst pragmatic, inherits as a legacy some of the traditional challenges of measuring cognition. An alternative approach would have been to develop brand new tests, specifically designed to address the measurement of cognitive change and ideally computer-based. Computer assessment can, by delivering stimuli at the prescribed time and capturing data, lighten the burden of test administrators. Computer assessment also allows for the rapid transfer of trial data, which lends itself neatly to prompt inspection and audit, ideally automatically. These new measures could be designed to more obviously reflect the tasks actually encountered in everyday life. Back in the 1970s Ulric Neisser begged psychologists to develop measures of cognition that had what he called ecological validity, i.e. tests that index cognition using paradigms encountered in real life. Developing a brand new set of tests could have addressed up front the issues faced when using measures in different linguistic and cultural settings. We could also employ sophisticated psychometric approaches such as Item Response Theory to develop tests that can be titrated to the abilities of the patients we test. These approaches don’t preclude the use of paper-and-pencil testing, but would benefit from computerised administration, as with computers only the items appropriate to the current patient would be presented. This would save time and, potentially, confusion. We could develop a selection of brief, easy to administer, sensitive ‘best of class’ measures. The challenge, as always, is that there is precious little money to be made in developing new tests. Consequently we often borrow measures that were designed as simple, fairly rudimentary bedside measures of cognition and press them into service as efficacy measures in clinical drug trials.
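To make the Item Response Theory idea concrete, here is a minimal sketch of computerized adaptive item selection under the one-parameter (Rasch) model. The item bank, difficulty values and crude grid-search ability estimate are all illustrative assumptions of mine, not any vendor's implementation; production adaptive tests use calibrated item banks and more sophisticated estimators.

```python
import math

# Hypothetical item bank: each item is represented only by its Rasch
# difficulty parameter b (illustrative values).
ITEM_BANK = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]

def p_correct(theta, b):
    """Rasch (1PL) probability of a correct response for ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, remaining):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate -- under the Rasch model this is the item carrying
    the most Fisher information."""
    return min(remaining, key=lambda b: abs(b - theta))

def estimate_theta(responses, grid_step=0.01):
    """Crude maximum-likelihood ability estimate over a grid.
    responses: list of (difficulty, correct) pairs."""
    best_theta, best_ll = 0.0, float("-inf")
    theta = -4.0
    while theta <= 4.0:
        ll = sum(math.log(p_correct(theta, b)) if correct
                 else math.log(1.0 - p_correct(theta, b))
                 for b, correct in responses)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
        theta += grid_step
    return best_theta
```

Presenting only the most informative item for the current ability estimate is what allows a computerised test to be titrated to each patient, shortening testing time without losing measurement precision.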
Initiatives directed at improving our cognition assessment measures are often specific to individual indications, such as was the case with the MATRICS program. Having spent time in different places, with different groups, considering the same issues for different indications, I’m wondering whether we should pool our skills, experience and resources to develop a cognition assessment that would serve all our needs. A true, international consensus would lend credibility and authority to such an enterprise. The support of stakeholders such as academic and clinical experts, regulatory bodies, commercial test vendors and other interested groups would be important in helping to get this done.
Academic and clinical research groups are a good source of creative ideas and innovative solutions. However, the process of integrating computer based assessments into a format acceptable for clinical trial use is an expensive and exacting process. Perhaps one way to meet current need would be for partnerships between research groups and commercial entities, which could operate in collaboration to mutual benefit. An assessment system capable of being used in a variety of CNS indications could accommodate the needs of many stakeholders. Regulators could include reference to a system in notes for guidance, sponsors could employ standard versions in their trials and vendors could be licensed the rights to distribute and support the resulting measures. Ultimately patients, caregivers and society as a whole would benefit from knowing that the drugs for which marketing approval was being sought had been shown to be truly efficacious.
John Harrison, PhD
Scientific Consultant, CRF Health
I managed to find time last November to attend the second CNS Summit Meeting. It seems difficult to find time for these events, but I am always pleased to have the opportunity to attend. The theme for this event was development and innovation in CNS drug development, a topic close to my heart. Plus the Boca Raton Resort & Club is no bad place to hang out. We started with an excellent ‘Call to arms’ from Patrick Kennedy, an inspirational speaker (and if you closed your eyes you could hear the rich speaking tones of his uncle John). His topic was ‘The Importance of Collaboration and Innovation in Developing New CNS Treatments’. Collaboration is just one of the drivers of innovation here: a keenness to establish collaborative arrangements in a pre-competitive space. The theme is one of sharing, especially of data collected for validation purposes. I’m convinced this is an idea whose time has come. Working in psychiatric and neurological drug development is a considerable challenge. Luca Santarelli, Sr. Vice President at Roche, has a magnificent slide on which he uses a Dantean metaphor comparing different therapeutic areas – with CNS development mapping to ‘hell’.
I get the opportunity to work in a number of CNS indications, but Alzheimer’s disease (AD) still occupies more than 50% of my professional time. AD seems to me the quintessential example of a challenging CNS disease. We desperately need to find a cure, or even some symptomatic relief, as the number of cases in many countries is set to treble over the next few years. Not too long ago we had a number of compounds in late-stage development, most of which were focused on amyloid plaques, one of the hallmark pathologies of this dreadful disease. Things are not so rosy now, with lots of high-profile failures in the interim, though we still have Bapineuzumab and Solanezumab to read out.
Remedying CNS disease usually means finding a compound for diseases for which the cause, or causes, are seldom known. We have become adept at screening for disease, have often met with success in identifying pathological markers, and have had some success with developing biomarkers. We’ve also become very aspirational with respect to compound development, genetic screening, neuroimaging, etc., and have ploughed a good deal of cash into the enterprise and into the aforementioned collaborative initiatives. Yet there is still a good deal to be done, especially with respect to the measurement instrumentation we employ.
For example, take cognition, a field of key interest for me. As is the case for many CNS indications, when we need a measure we tend to borrow from clinical psychology and end up employing tests that were rarely designed for the repeated assessment we need to conduct to detect drug effects. A consequence of this approach is that we have employed measures such as the Mini-Mental State Examination (MMSE) for recruiting patients and the ADAS-cog for measuring efficacy. However, neither test was designed for the purposes to which we have put them.
The inadequacies of the MMSE were recently very publicly highlighted in a BBC drama ‘Five Days’, which began with a caregiver tutoring her AD-suffering mother on the content of the scale! The inadequacies of the ADAS-cog have also been evident for some time. This is not the place to rehearse the various issues attached to its use, but we might profitably consider a key issue, the presence of ‘ceiling’ effects. Here I am referring to tests on which study participants tend to perform perfectly and which are consequently incapable of showing improvement. Previous studies have reported that performance on nine of the 12 most popularly used subtests reaches ceiling in about 80% of all patients. The three tests that are not prone to this effect all assess memory. A popular comment in defence of the continued use of the ADAS-cog is that a genuinely efficacious drug would be capable of demonstrating improvement. This may or may not be true, but it still seems troubling that a group of professionals as innovative, creative and progressive as CNS drug developers would be willing to make do with the status quo. Incidentally, the ADAS-cog can be a confusing instrument to administer, and consequently recording scores can be a source of error. We could also bring a little profitable innovation by parsing early returns from sites for any conspicuous errors. This could be an automatic process based on known data characteristics, such as the correlations between different subtests.
I began considering the content of this blog a couple of weeks ago when it was still appropriate to wish email correspondents a ‘Happy New Year’. New Year is a time to be resolute and it seems to me that this might be the time to start work on improving the quality of our measurement instruments so that when we do finally explain the causes and develop cures, we can characterize the effects of our drugs using first rate, best of class, validated tools.
John Harrison, PhD
Scientific Consultant, CRF Health
I note that the discussion regarding the future of 2G networks has been initiated by a few machine-to-machine providers who seek to use claims of an impending phase-out against their competitors deploying GSM-only equipment. There are currently no published plans or announcements from any wireless carrier regarding phase-out of their 2G networks, and this lack of any published plans creates an opportunity to present such claims.
There is naturally an on-going evolution of networks, and eventually any technology will become outdated. To analyze the future of 2G, we should try to understand the drivers for wireless carriers in the development of their networks. Essentially, their decisions are based on the relevant regulatory framework and the business case. Wireless radio spectrum is typically licensed for a fixed term and for specific use, determining which wireless technology can be used to operate on the licensed frequency band. While it is difficult to predict how the license landscape will change over time in each country, the periods tend to be fairly long, and it is not reasonable for the regulator to make abrupt changes that would severely affect how the carriers can utilize their massive investments in network infrastructure. Carriers will not phase out a technology that is generating a profit for them unless they have to, or unless doing so makes business sense.
Let's look at why and when a network operator would want to phase out a given network technology. There are technical concerns regarding the use of different wireless technologies on the same frequency bands, and traditionally large chunks of the wireless radio spectrum have been dedicated to a given technology (like GSM, WCDMA etc). Also, as can easily be seen from the specifications of handsets, support for each technology/frequency band set needs to be built into the radio design on the phone. Consequently, there have been strict and established standards for which technology to use on which frequency band.
With the development of wireless technology it has recently become more viable to be flexible in the assignment of technologies. A concept of "refarming" frequencies has been introduced and is also increasingly endorsed by regulators. In refarming, frequencies previously allocated to one technology, like 2G GSM, are reused with a new technology, usually with the goal of providing more capacity for high speed mobile broadband services. Overall, the rationale for this process is highly dependent on the frequency bands held by the carrier, customer needs and the business case. As parts of the old frequency band are refarmed, this first results in a drop in capacity (how many simultaneous users are supported), and only impacts coverage (how good and comprehensive the service is) once the process goes so far that complete bands are taken over by the new technology. The carriers will obviously control the process in such a way that their existing subscriber base is not adversely affected.
When assessing the possible effects of 2G refarming, it should be noted that new technologies like LTE are really not supported by many handsets yet. Each operator has a unique selection of licenses, frequencies and technologies that they need to leverage. They need to support their existing customers, so switching to a new technology that their customers' phones cannot use is not a viable option. For example, a carrier may have licenses for 850 and 1900 MHz used by GSM, and a WCDMA license for 1700 MHz, but even the newest phones may not support WCDMA on this band, and will thus rely on 2G to operate in this carrier's network. The carriers need to plan their possible refarming upgrades carefully to match the evolution of technology standards and handsets in order to minimize the negative impact on their existing subscriber base, while at the same time maximizing the total return on their investments. In some countries like Finland, where operator subsidies have not been commonplace, there is even less carrier control over the handsets used by subscribers, and a significant portion of users are still using old 2G-only phones. To conclude, the machine-to-machine market mentioned at the beginning is largely static, with slow turnover of technology. This all means there is a large existing base of devices in the current networks requiring continuing support of the traditional 2G service.
Mobile broadband services will continue to grow in popularity and carriers will need more radio spectrum in order to support the growth of these services. However, refarming of existing frequency bands is not the only alternative, since new frequency bands are also being allocated. For instance, part of the radio spectrum previously used by analogue TV is going to be reallocated to wireless broadband (the so-called ‘digital dividend’). It can be estimated that refarming will probably have an effect on the availability of 2G services within the next ten years. Nevertheless, any wireless carrier that engages in this process will provide a graceful migration to the new technologies and is likely to make announcements well before the process affects its existing GSM subscriber base. Until these announcements, partial refarming of the 2G frequencies may cause some decrease in the capacity of GSM-based packet data services, but this will have minimal or no effect on ePRO data collection due to the relatively low data bandwidth needed for collecting data from the eDiaries.
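The final point can be illustrated with a rough back-of-envelope calculation. All of the figures below are assumptions chosen for illustration, not measured values from any carrier or eDiary deployment:

```python
# Back-of-envelope sketch: even a degraded GPRS link comfortably
# carries a subject's daily eDiary traffic. All numbers are assumed.

diary_entry_bytes = 2_000    # assumed size of one transmitted diary entry
entries_per_day = 5          # assumed questionnaires per subject per day
daily_payload_bits = diary_entry_bytes * entries_per_day * 8

gprs_bits_per_sec = 20_000   # conservative assumed GPRS throughput

seconds_to_sync = daily_payload_bits / gprs_bits_per_sec
print(f"Daily eDiary sync takes roughly {seconds_to_sync:.0f} s over GPRS")
```

Even if refarming halved the assumed throughput, a day's worth of diary entries would still transfer in seconds, which is why capacity reductions on 2G data services matter so little for this use case.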
CTO, CRF Health
While providing site training at an Investigator Meeting recently, I came across some interesting comments from sites that were not familiar with using eDiaries or tablets to capture PRO or ClinRO data. One study coordinator was concerned about using eDiaries with certain populations, and was wondering if we were going to provide paper checklists or backup questionnaires on paper for subjects who were not comfortable using eDiaries. This comment brought me back to the realization that while I have been working with ePRO for over six years, there are some people who still are not familiar with the technology or the risks inherent in mixing modalities of PRO collection. I thought I would speak to some of these items, in hopes of providing a basic understanding of ePRO and some items to consider.
When choosing to collect Patient Reported Outcomes for a clinical trial, particularly for a primary or secondary endpoint, it is important to ensure the quality of the data being collected. Most of us have heard of or experienced firsthand the “parking lot syndrome”, where patients fill in dosing diary cards or other PRO questionnaires in the parking lot immediately before going in for an office visit. This is one of the main reasons sponsors choose ePRO over paper – ePRO technology allows for entry “windows” to ensure the data is being captured in a timely manner, and is not subject to the failings of memory recall. I’m lucky I can remember what I had for breakfast yesterday, let alone how I felt or at what time I took medicine!
While some people are still concerned about technology and dealing with certain patient populations – in particular, the elderly – our statistics have shown that eDiary compliance by elderly subjects is extremely high (>95%). Technology can be scary to people, but I have found that there are six key factors that improve the likelihood of success:
Success Factor 1 – Simple eDiary Design. I follow the KISS model when designing ePRO collection tools – Keep It Short and Simple. If the eDiary is too complicated, with strict entry windows and hard stops built into the eDiary, then workarounds need to be built and processes changed if a subject deviates from what is expected. While I design to help the subject follow the protocol, I try not to make the design so rigid that the programming increases frustration and reduces compliance.
Success Factor 2 - Logical Edit Checks and Information Messages. Inclusion of logical edit checks and information messages improves the quality of the data collected and provides guidance for the subject. For example, if a subject is filling in a multi-question PRO and one response prompts for additional questions, then I build in the branching logic to ask the appropriate questions on a “Yes” answer, and to skip those questions on a “No” answer. Additionally, I build in popup messages that remind the subject if he/she answers a question that is outside an expected range: for example, if a subject enters an amount of medication that is outside the expected range for the protocol, then the eDiary would remind the subject to call the doctor.
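The branching logic and edit check described above can be sketched in a few lines. The question fields, the dose limit and the message text here are all hypothetical, invented to illustrate the pattern rather than drawn from any real study:

```python
# Illustrative sketch of branching logic plus an out-of-range edit
# check for an eDiary question flow. All fields and limits are assumed.

MAX_EXPECTED_DOSES = 4  # assumed per-protocol daily limit


def administer(took_medication, doses=None):
    """Build a diary record, branching on the Yes/No answer and
    attaching a reminder message for out-of-range dose entries."""
    if not took_medication:
        # "No" branch: skip the follow-up dose questions entirely.
        return {"took_medication": False}
    messages = []
    if doses is not None and doses > MAX_EXPECTED_DOSES:
        # Edit check: remind rather than block, so the subject can
        # still record what actually happened.
        messages.append("That is more medication than expected - "
                        "please call your doctor.")
    return {"took_medication": True, "doses": doses, "messages": messages}


print(administer(False))
print(administer(True, doses=6))
```

The design choice worth noting is that the out-of-range entry produces a reminder message rather than a hard stop, in line with Success Factor 1's warning against overly rigid designs.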
Success Factor 3 - Reminder Alarms. Reminder alarms can improve compliance and can provide “peace of mind” for subjects. If a subject is filling in multiple questionnaires daily and a separate weekly questionnaire at a different time of day, I would build in an alarm to remind the subject to fill in the daily questionnaires, and I would build in a separate alarm that would sound to remind the subject to fill in the weekly questionnaire.
Success Factor 4 – Training Material. The fourth success factor encompasses a wide area of items under the umbrella of Training. Providing detailed, hands-on training to the monitors and site personnel using a “Train the Trainer” approach helps facilitate the training subjects will receive at site. Additional support in the form of a detailed Site Manual helps reinforce the hands-on training and provides extensive troubleshooting and FAQs to support site personnel with both their own use and subjects’ use of the eDiary. Having a Training Module built into the eDiary allows the subject to practice using the device and to become familiar with it before leaving the office. Providing easy to read, step-by-step guides for the subject to read and follow when using the eDiary helps ensure a comfortable transition from on-site support with the study coordinator to self-use at home. While these guides are most used by older subjects who are less tech-savvy, they are very helpful for all subjects. The guides should be easy to follow, include pictures of both the device and the screens to walk the subjects through everything they need to know about the device, and have a list of steps that they need to follow at any given time point. Finally, access to helpdesk support for both sites and subjects is a key “safety net” to reinforce training and to get sites and subjects “back on track”.
Success Factor 5 – One Method of PRO Collection. Sites often want “back up” methods of collecting PRO, and want to have paper questionnaires on hand in case a subject is not comfortable using ePRO. I am now seeing protocols include an inclusion criterion that states that the subject must be willing to use an eDiary. I recommend this be included in all protocols. If a study provides alternatives like paper, then we not only deal with two points of collection and the multiple processes that need to be built to ensure consistency, but we also send a mixed message to the subjects and undermine their confidence in ePRO.
Success Factor 6 – Site Personnel. Whenever I am at an investigator meeting and am training sites on how to use the ePRO devices, I always encourage site personnel to be “cheerleaders” or “champions” of ePRO. I remind them that if they act frustrated with a device or appear to treat the ePRO collection tool as just one more piece of equipment that they need to deal with, this attitude is transferred to the subjects. I have seen a higher incidence of helpdesk calls, a lower rate of compliance, and a larger number of issues at sites where the site personnel appear unwilling or unmotivated during that initial training.
Conversely, I have spoken with study coordinators who are high enrollers with high compliance rates, and I have discovered that universally those study coordinators make it a point of knowing everything they can about the devices, are engaged and ask lots of questions during the training, read all of the study materials including the guides given to subjects, and motivate their subjects by saying things like, “Your data is so important to this study, that the sponsor spent all this money getting these cool devices to collect your data. You (the subject) are key to the success of this study!”
In conclusion, while we can provide all of the technology, design and support for the study, we must never underestimate the human factor, and how we present to and train our subjects on the use of ePRO. It is a team effort! The sponsor and ePRO provider can design and deliver quality ePRO to the sites, but we need the help of the doctors, nurses, monitors and helpdesk to provide seamless support and to be champions of ePRO in order to ensure subjects are successful.
ePRO Basics – Partnering with an ePRO Provider – What to Consider
ePRO Basics – Design Considerations for ePRO
ePRO Basics – Optimal Training Methods When Using ePRO
ePRO Basics – Monitoring ePRO Data: Does ePRO Data Get “Cleaned”?
Program Manager, CRF Health
This was a very interesting session. By now we’ve all heard the term “cloud” computing – even my wife, who is a dedicated anti-computer person and just recently got a mobile phone, asked me about it last week. Still, I find myself thinking "There’s no way I’m putting my personal information at risk by trusting it to something so nebulous that it’s actually named the 'cloud'". It turns out that the eClinical Forum's main interest is really in private clouds these days – not public ones. Nick Neri of PharmaPros and Kevin Ahonen of Biogen-Idec did a great job presenting a case study of cloud technology put to good use. They very clearly described the concept of cloud computing and how it differs from traditional hosted solutions. It was noted that some information is not really meant for the cloud – at least not yet. In the case study that was presented, only operational data is stored in and accessed from the cloud. What is operational data? It’s the data about the data – metadata, if you will: status values, metrics and information pertaining to how clinical data is being used. Having an external system out there that can securely access data from many distinct “silo” locations within your company’s clinical infrastructure – in a controlled fashion and without adding burden to your existing IT systems – is an idea that can meet a lot of needs. “Terminology standardization” was also brought up: the need to reconcile different naming conventions across different systems. This is vital to cloud computing, so that information is correctly aggregated and people know what it is they’re being presented with. As with a lot of things these days, the technology implementation is often simpler than getting people to accept new processes and open their minds to new ideas. This is especially true for those people who have had much success living in their very tall silo.
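The terminology standardization idea boils down to reconciling each silo's field names into a shared vocabulary before aggregation. A minimal sketch, with all system and field names invented for illustration:

```python
# Hypothetical sketch: reconcile the naming conventions of two silo
# systems before aggregating their operational metadata in a shared
# (private) cloud layer. All field names below are invented.

EDC_TO_CANONICAL = {"subj_id": "subject_id", "site_no": "site_id"}
EPRO_TO_CANONICAL = {"patient": "subject_id", "center": "site_id"}


def to_canonical(record, mapping):
    """Rename one system's fields into the shared vocabulary."""
    return {mapping.get(key, key): value for key, value in record.items()}


edc_row = {"subj_id": "1001", "site_no": "12", "status": "enrolled"}
epro_row = {"patient": "1001", "center": "12", "compliance": 0.97}

# Once both rows speak the same vocabulary they can be safely merged.
merged = {**to_canonical(edc_row, EDC_TO_CANONICAL),
          **to_canonical(epro_row, EPRO_TO_CANONICAL)}
print(merged)
```

Real implementations would of course lean on controlled vocabularies and standards such as CDISC rather than hand-written mappings, but the reconciliation step itself is this simple in concept.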
“Don’t fix it if it’s not broken” is a valid argument unless what’s currently working needs to work within a much larger scope and cannot. Nick and Kevin made a very good argument for stepping back in order to be able to see the forest in addition to the trees. There was a lot of interest in this and a lot of interesting side conversations throughout the week.
The week wasn’t just a series of presentations, with the attendees sitting passively and listening. The eCF meetings also have a number of workshops focused on issues relevant to eClinical and designed to formulate a group consensus that can foster change in the industry or recommend best practices.
Each of these workshops broke the attendees into groups of 8 or so people per table. Tables were assigned a particular task and reported back to the entire group at the end of the breakout session. The details of these workshops are beyond the scope of this blog, but I can say that they were well run, interesting and designed such that each group had a cross-section of people with a variety of backgrounds and expertise.
There were separate workshops pertaining to:
- Site qualification of sites that intend to use electronic health records (EHR) as source data for clinical trials. Regulations require that such an EHR system meet the same requirements as other systems used for eSource in a clinical trial.
- A risk-based approach to SDV
- Leveraging Electronic Health Records for clinical research
- Serious Adverse Event integration
Attendees were assigned to groups such that each table contained a wide range of experience: CRO, ePRO, Data Management, Safety, etc. This helped to ensure a thoughtful exchange of ideas and kept the discussion lively.
If you’re not currently a member of the eClinical Forum, I can highly recommend it. If you are, I hope to see you at the next meeting!
Senior Director, Technical Support
Can it really be 7 years since I attended an eClinical Forum (eCF) meeting? Back in 2004, the meeting I attended was held in Cincinnati and hosted by Procter & Gamble. At that time, I was filling in for our regular representative who had conflicting obligations. I thought then what a good organization this was. Beginning this year I’m fortunate enough to represent CRF Health at the bi-annual eCF meetings in the US. I love this organization. It brings together a diverse group of pharma, biotech and technology provider representatives to exchange ideas and help shape the future of the eClinical landscape. Ideas appeared, conversation flowed and it seemed I could literally feel progress being made. Everyone there was genuinely interested in advancing the field of eClinical and coming up with best practices and recommendations. What I really like is the feeling of camaraderie and the lack of commercialism. I sat with competitors and clients alike and discussed issues that can improve our ability to bring better healthcare products to market faster. What a refreshing experience.
Day 1 began with presentations and discussion related to integration. Case studies were presented by Allergan, PHT and CRF Health. The overall feeling I got from these case studies is that the time is ripe from a technology perspective for real-time, transactional integrations based on web-services APIs. I commented (to some laughter) that CRF Health has had web-service APIs for a number of years now, but it was like having one “walkie-talkie”… there was no one to talk to! I believe that the maturity of this technology, together with CDISC and HL7 standards, is finally resulting in adoption rates that will rapidly transform the eClinical landscape in the next 2-3 years. I don’t say that lightly. The clinical research industry evolves and changes direction about as fast as you can turn the Titanic with a canoe paddle. In fact, when I attended my last eCF meeting in 2004, there were representatives from Siemens and GE Health forecasting the integration of Electronic Health Record systems (EHR) and clinical systems within 5 years. At last week’s eCF meeting in Newport Beach, California, the same topic was discussed in detail – and I hadn’t really detected any meaningful progress. The same major problems seem to be blocking our path: privacy concerns, a lack of standard nomenclature, and the absence of an effective business model that will sustain such a system beyond an ideological pilot. Still, we need to start somewhere, and real-time integration is now possible with very little capital investment. Technology is such that we no longer need to worry about the “plumbing”. We can focus on determining the best information to send through the pipes and how to best present it to the end user. One step at a time and soon we’ll be off and running. It’s apparent from these case studies that learning to walk in this environment is easier than first anticipated.
We then had a presentation and discussion of open source for eClinical software. I must admit that I remain a skeptic. To me, open source software seems best suited for foundation technologies that underlie specific needs, rather than meeting those specific needs themselves. Open source operating systems seem more feasible to me than open source eClinical software. Maybe I’m just too old; working on something only to give it away for free goes against my idea of a good business model. In this regard, turning the Titanic with a canoe paddle seems to be further complicated by the need to convince the captain to even dip the paddle into the water. I’d love to hear others’ thoughts on this.
In the second part of this blog post, I will discuss Cloud Computing and the eClinical workshops that were held at the eCF Meeting.
Senior Director, Technical Support