‘We Need the Equivalent of the National Weather Service for Education Technology’
Exchanging edtech evidence with herder-of-cats Bart Epstein.
INTERVIEW | by Victor Rivero
Seriously, though—how does this sound: “A national movement to support educators and institutional leaders to make better-informed decisions about the educational technologies they select and implement.”
Pretty important, right?
Well, Bart Epstein is president and CEO of the nonprofit EdTech Evidence Exchange, and a Research Associate Professor at the University of Virginia Curry School of Education and Human Development. In these roles, Bart helps to advance exactly such a movement. How he got there is another story—and one we’ll be taking up in our conversation here.
But first, a little more background: Prior to launching the Exchange, Bart served as Founding CEO of the Jefferson Education Accelerator and its companion venture fund, which focused on helping growth-stage education companies perform real academic research to demonstrate impact on student learning outcomes. He then launched the Jefferson Education Exchange, which changed its name in August 2020 to the EdTech Evidence Exchange. Bart previously spent ten years helping to build the world’s largest online tutoring and homework help service, Tutor.com. Along the way he served as Chief Strategy Officer, General Counsel, and General Manager of the U.S. Department of Defense Online Tutoring and Homework Help Service for Military Families.
Bart earned a B.A. from the State University of New York at Binghamton and a J.D. from the University of Virginia, where he was an editor of the Virginia Law Review and a John M. Olin Scholar for Law & Economics. Bart served as a law clerk to the former Chief Judge of the United States Court of Appeals for the Ninth Circuit before working as a corporate attorney for Latham & Watkins.
A frequent commentator about the importance of valuing merit over marketing in the selection of education technologies for our nation’s classrooms, Bart has written for The Hill, The Hechinger Report, EdSurge, and The 74, among others.
He currently serves on the board of ASCD, the nation’s largest nonprofit, non-union, nonpartisan organization dedicated to teacher advocacy and professional development. He has also served on the boards of numerous education technology startups, nonprofits, and growth-stage companies.
In this long-form interview, we delve deep with Bart into the impetus for his early and current work, the lessons of the Exchange, what the National Weather Service has to do with it, and a simple request of everyone involved with education as a consumer, user, or educator.
What prompted you to get involved with technology and education, way back when? And I mean way back, well before 2015; you’d been looking at this area long before then.
When I was a little kid, my dad occasionally brought me to his office. He worked for IBM for more than thirty years, back when IBM dominated not only the marketplace but popular culture. I have fond memories of learning how to input instructions on punch cards and watching him work in a freezing cold room full of computers running magnetic tapes. When personal computers were eventually invented, I wanted one so badly that I made one out of notebook paper and pretended to type on it. My parents eventually bought me one of the first Atari models and I quickly taught myself to program in the BASIC language. I remember my dad helping me write simple flashcard software to help me learn a foreign language. It was rudimentary but it helped me learn those words. My first Atari computer did not even have a disk drive. Instead, it had a cassette tape player that translated beeps and boops into computer code. Somewhere in my attic I may have the original States and Capitals program on cassette. It took nearly five minutes to load the program into memory via cassette but it unquestionably helped me master my states and capitals. (Hello, Frankfort!)
From there I found all sorts of ways to use computers to support my own learning throughout college and graduate school, where I was one of the first people to use a laptop computer to take notes. (I can probably type five times faster than I can write with a pen.)
In my first job after college, working for The Princeton Review, I saw firsthand how technology could be used to support instruction. Our students would take practice tests that our computers would not only grade but would also analyze to look for basic trends. Using simple arrays and algorithms, it was not hard to see which subtopics students were struggling with. Feeding this information back to their teachers helped those teachers adjust and personalize their instruction, which almost always resulted in increased learning, higher scores on standardized tests, and happy families.
‘Feeding this information back to their teachers helped those teachers adjust and personalize their instruction, which almost always resulted in increased learning, higher scores on standardized tests, and happy families.’
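For readers who want to picture the kind of analysis Bart is describing, here is a minimal sketch of tallying per-subtopic error rates from graded practice tests. It is an illustration only, not The Princeton Review’s actual system; the student names, subtopics, and data are hypothetical.

```python
from collections import defaultdict

# Hypothetical graded practice-test responses: (student, subtopic, answered_correctly)
responses = [
    ("ana", "geometry", True),
    ("ana", "ratios", False),
    ("ben", "geometry", False),
    ("ben", "ratios", False),
    ("ben", "exponents", True),
]

def subtopic_error_rates(responses):
    """Aggregate graded responses into a per-subtopic error rate."""
    totals = defaultdict(int)
    misses = defaultdict(int)
    for _student, subtopic, correct in responses:
        totals[subtopic] += 1
        if not correct:
            misses[subtopic] += 1
    return {topic: misses[topic] / totals[topic] for topic in totals}

# Sort so a teacher sees the weakest subtopics first.
for topic, rate in sorted(subtopic_error_rates(responses).items(),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{topic}: {rate:.0%} of questions missed")
```

Even a rollup this simple tells a teacher where to spend the next review session, which is the feedback loop Bart credits for the gains he saw.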
Nearly a decade after starting as a teacher at The Princeton Review, my former boss there founded the first online tutoring company — Tutor.com — and invited me to join his team and help to build the company. Over the following decade while I was there, we tutored millions of students and I saw first-hand over and over again how powerful education technology can be when it properly meets the needs of students. In my time there I reviewed thousands of session transcripts and tens of thousands of post-session comments left by students who raved about how they were getting smarter and enjoying school more because they could get on-demand help from real tutors 24/7.
While I was at Tutor.com we partnered with scores of other online education providers, licensed various instructional materials, and did deep diligence on dozens of companies that we considered acquiring before we were ourselves acquired by a public company in 2012.
In 2014, the University of Virginia and Bart Epstein launched the first-ever edtech accelerator focused on conducting efficacy research on participating companies’ products. How did you arrive at this point, and what happened just before it? You’ve since concluded that “third-party research isn’t materially influencing edtech decision-makers. Rather, decision-makers value the insights from their contextual peers, and would re-examine their processes and decisions if they could access these insights.” How did you arrive at those conclusions?
By 2014, I had become fairly expert in the emerging world of education technology. I was serving as a mentor for multiple edtech accelerators and had a front row seat to the rapid growth of numerous companies whose success clearly (to me) owed far more to their marketing prowess than the quality of their products. Around this time, some friends who served on the Board of the University of Virginia (UVA) School of Education Foundation asked me to help them design an edtech incubator or accelerator, as well as a companion venture fund.
It was around this time that I met Bob Pianta, the dean of the UVA School of Education, and learned that he was not interested in simply churning out more edtech startups to get funded, make sales, and get acquired. He cared deeply about impact and shared my frustration that so many people were getting rich peddling the edtech equivalent of junk food. When he told me that he was going to serve as Chairman of the Board of the UVA edtech accelerator I knew that I had found the perfect partner to launch an accelerator that would measure success not primarily by financial returns but by provable impact on student learning outcomes.
Over the next few years, I led the wonderfully dedicated team that ran the accelerator and provided support to a wide variety of companies whose leaders wanted to compete not just by marketing but on the basis of substance. The accelerator’s motto was “Merit, Not Marketing.” We paid out six figures of cash to cover the efficacy research costs of some of our participating companies. We had high hopes that these companies would grow faster and earn higher valuations from investors because they had real proof to back up their pedagogical claims.
The problem? Having evidence of efficacy back then was of very little value in the marketplace, and the expensive research we were funding was not helping companies grow faster or raise more money.
To me, this was a puzzle. Why would the “best” products not be winning in the marketplace? How could people be okay with tens of millions of students not learning properly?
‘Why would the “best” products not be winning in the marketplace? How could people be okay with tens of millions of students not learning properly?’
These questions and more merited further investigation, and so I started informally having conversations with teachers, administrators, superintendents, entrepreneurs, investors, philanthropists, nonprofit leaders, researchers, and government officials. What they shared was horrifying and infuriating. Most of them suspected that there was a huge problem, but everyone said it was not their fault—or their responsibility to solve the problem.
Thankfully, these people and more than a hundred others agreed to spend a year together doing field research and discussing these issues before we convened for the first EdTech Efficacy Research Academic Symposium (in 2017), where we reached several important collective realizations.
First, barely anyone was using evidence of efficacy (or information about implementation) to make purchase decisions. A small percentage of people claimed to care about efficacy research, but even they rarely used evidence to make decisions. Why? A key reason was that very little research was available, and the vast majority of what did exist was not performed by independent researchers.
Second, because of the fragmented nature of our education system, it does not make financial sense for a school district to spend $3M studying a product that it might spend $300K on. Yet when 50 other districts make the same rational decision, you get a company collecting $15M in revenue without anyone knowing whether the product even works. This is called a collective action problem.
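To make the arithmetic behind that collective action problem explicit, here is a minimal sketch using the illustrative figures above; the numbers are hypothetical, not data from any real district or vendor.

```python
# Illustrative figures from the example above; hypothetical, not real district data.
study_cost = 3_000_000   # what a rigorous efficacy study might cost a single district
license_cost = 300_000   # what that district actually spends on the product
num_districts = 50       # districts that each make the same "rational" decision

# No single district will pay ten times the product's price to study it...
print(f"Study costs {study_cost / license_cost:.0f}x the license fee")

# ...yet collectively those districts pay the vendor $15M
# without anyone knowing whether the product works.
print(f"Vendor revenue across districts: ${num_districts * license_cost:,}")
```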
Third, there was broad agreement that in the absence of proper efficacy research – which is expensive and takes a long time to do – we could and should at least be able to better understand how various edtech tools are being used in districts around the country. But once again we ran into our old enemy, the collective action problem. Almost every school district would love to know which edtech products every other district is using and how intensively those products are being used across a wide variety of contextual environments. But no individual school district has the incentive to take the time necessary to have its people carefully document their own experiences using scores or even hundreds of different edtech products.
After activating more than 10 working groups and 300 stakeholders, your Symposium convened in Washington, D.C., with overwhelming agreement on three fronts: context matters in edtech implementation; the current chaotic state of edtech is no single actor’s fault; and somebody needs to lead a broad research effort to understand and share what works where, and why. And that somebody was you? How did that happen?
After the symposium it was pretty clear what needed to happen next. The problem was that none of the existing education nonprofits had the interest, capacity, and bandwidth to take on the work of what would eventually become the EdTech Evidence Exchange. All who participated in the symposium said they would support and root for whoever took on the challenge, but it was too big and complex for any of them to take on themselves. And I couldn’t blame them for feeling that way. The work ahead was daunting for sure. And it really needed to be done in partnership with a major research university, because “somebody” would need to do serious academic research to help us all understand the factors that explain the variation in edtech implementation from one school to another.
‘The problem was that none of the existing education nonprofits had the interest, capacity, and bandwidth to take on the work…’
At that point in time, it seemed clear to me that if I did not offer to lead the work it would simply not get done. The existing nonprofits each had their own mandates and missions and focus. Everyone kept saying that these problems weren’t their fault or their obligation to fix, and they were right. The vast majority of edtech dysfunction is simply a natural consequence of our education system being fragmented into more than 13,000 school districts that are monopoly service providers of a government service. In the absence of federal rules (as we have for medicine) or federal information aggregation (as with the National Weather Service), the problem would continue and get worse as national spending on edtech continued to climb each year.
With the encouragement and support of the larger education ecosystem, I agreed to help launch the EdTech Evidence Exchange with the understanding that everyone would need to be involved in some way. And I am thrilled to say that the vast majority of them have been good to their word and have played an important supporting role in recent years as we have built the Exchange platform, run the EdTech Genome Project, started collecting implementation feedback from thousands of educators, and advocated for the use of evidence in education decision-making.
Trace for us the key parts of the story of the Exchange, and specifically the Genome Project. This really is a unique enterprise, and few have herded cats in a more orderly fashion than I see here. Has it been challenging? What support have you gotten from various players, key mentors, or certain colleagues? What bumps have there been, and what victories?
The EdTech Genome Project was a fun and highly satisfying process, and there are more people to thank than I have space or time for. Dan Brown, Emily Kohler, Christine Tomasik, and Kate Tindle all played huge roles as we recruited and then managed a broad and diverse group of education stakeholders to review our research and make key decisions. Dozens of organizations and companies contributed people to be part of the process through which we collectively identified the 10 most important edtech implementation factors and then created new measurement instruments to detect and quantify their presence.
Melissa Collins, Joseph South, and Verna Lalbeharie co-chaired the EdTech Genome Project’s national steering committee. They did an incredible job of building consensus amongst the three dozen widely-respected education stakeholders. Together, they decided which implementation variables merited further study and analysis, and they eventually adopted the definitions and measurement instruments created in sub-groups dedicated to each of the ten factors.
Rose Else-Mitchell, now President at Scholastic, also did a fantastic job leading our Industry Council of nearly two dozen leading education companies that were deeply involved in the EdTech Genome Project. It was very important to our team at the EdTech Evidence Exchange that the EdTech Genome Project – and our work more broadly – be done “with” industry and not “to” industry.
There were no major challenges or hiccups during the EdTech Genome Project. Everyone involved worked hard and was highly responsive and open-minded. It was really something quite special and I am forever grateful to everyone who participated.
Do you see the current state of edtech still as ‘chaotic’? What indicators are there that it may be less chaotic than it was? Or have the pandemic and remote and hybrid learning blown all that out of the water?
The market is still chaotic in many ways but there are some indicators that things are moving in the right direction. To begin with, the Every Student Succeeds Act has codified evidence requirements that “are designed to ensure that states, districts, and schools can identify programs, practices, products, and policies that work across various populations.” This law defines various tiers of evidence that buyers are supposed to demand before they spend federal recovery dollars. The Department of Education has yet to enforce this law as far as I know but the fact that it is on the books is putting real social and political pressure on districts and companies to generate and use evidence in ways that they never did before.
‘The market is still chaotic in many ways but there are some indicators that things are moving in the right direction.’
Earlier this year, Congresswoman Meng and I published an op-ed in which we laid out the case for why the federal government needs to play a major role in funding edtech efficacy and implementation research. As I mentioned above, it will simply not happen at scale otherwise — due to the fragmented nature of our education system.
The EdTech Evidence Exchange recently published a new “explainer” video in which we attempt to show how the creation of the National Weather Service is an example we need to follow if we expect to get edtech right.
The pandemic has unquestionably accelerated the transition to digital overall, but it has exposed very sharp divisions between those who have been working for years to understand how to use technology to support education – and those who found religion only when disaster struck.
What audience(s) would you like to speak to (e.g., technology company leaders, education leaders, parents) and what message(s) would you like to impart regarding tech’s role in learning, action they can take to better learning, or anything else?
I have a simple request of everyone involved with education as a consumer, user, or educator: Ask for evidence! Ask to see the studies. Ask for data about where the product works best and what lessons other schools have learned the hard way. You may think that your lone voice does not matter, but it does. When a teacher asks a principal for evidence of efficacy and that request works its way into an RFP, it changes the behavior of vendors – in a good way.
When a parent asks a superintendent at a public session where the evidence is that the product being implemented is effective, and a teacher asks the same question, as does a local reporter, it puts pressure on the superintendent to ask the company for evidence. When enough school and district buyers ask for (or demand) evidence relating to efficacy and implementation, that changes the calculus on the company side and makes it more likely that a company will choose to invest dollars in proving that their product works—as opposed to hiring another marketing person, two social media people, and traveling to four more trade shows.
To the companies making bold promises about powerful edtech, I would say that we are seeing a slow but steady movement of the marketplace towards companies increasingly competing on the basis of merit, not marketing. We still have a long way to go, and some of you can unquestionably skate by without doing any research, but you need to develop a plan for how you will respond if you learn that, two years ago, your main competitor started investing in research and they are about to hit you over the head with it. Doing research right takes time, but it does not have to cost an arm and a leg. Companies like LearnPlatform and Empirical Education are making it easier and cheaper for companies to do rapid-cycle evaluations and otherwise demonstrate their efficacy. Start small with your research. Find out in a safe way whether your product does what you think it does. If it does, and if your competitors can’t prove the same, be aggressive and compete on quality—even if it means that you have to educate your prospects a bit about how to appreciate and interpret evidence.
How might EdTech Digest readers (tech leaders, education leaders) assist you in your mission?
Please watch our new two-minute “explainer” video and if it moves you, share it with the philanthropists that you know and the people whose edtech companies have made them rich enough that they have the capacity to give back to the field by supporting an initiative that is working to help schools learn from each other’s experiences. The EdTech Evidence Exchange is classified by the IRS as not only a nonprofit but as a public charity. We sell nothing; we take no money from edtech companies; and we pay meaningful cash stipends to thousands of teachers and administrators who agree to give us 45 minutes of their time to describe their experiences implementing specific edtech products. It is not a given that we will exist or succeed beyond our current funding. We are thrilled to do this work but need donations to keep the lights on and the implementation data flowing.
Regarding the future, you’ve written: “Imagine a world where educators can learn from their contextual peers.” So you have a plan to get there… elaborate.
Imagine a world in which there is no National Weather Service, and meteorologists in every city must make hundreds of calls each day to gather temperature, barometric pressure, and wind data individually from their peers in other cities. That would be incredibly inefficient and would lead to forecasts far less reliable than the ones we enjoy today thanks to the National Weather Service, which operates satellites and collects data from more than 100,000 stations around the country every day.
We need the equivalent of the National Weather Service for education technology. We need to know how products are performing across thousands of districts and why those products can be a huge success in some places but barely be used in others. The information is out there if only we would gather it. Just like weather data.
‘We need to know how products are performing across thousands of districts and why those products can be a huge success in some places but barely be used in others.’
Anything else you care to add or emphasize concerning your work in edtech?
Most legislative change happens because someone has a financial interest at stake, and it makes sense for them to invest lobbying dollars to encourage Congress to pass certain legislation. The same collective action problem that I discussed before is also a key reason why there has yet to be federal legislation in this area. 13,000 individual school districts could each benefit mightily from seeing the federal government invest in education research. But none of those districts has the individual incentive to hire a lobbyist to advocate for changes that might only materialize a few years from now and would benefit other districts just as much. Nobody wants to pay or do the work when others can free ride off their efforts. This is behavioral economics 101.
NEWS UPDATE: Just as this article saw publication, The EdTech Evidence Exchange and InnovateEDU announced that they are merging, aligning the work of two nonprofits committed to using evidence to improve edtech selection and implementation. Learn more.
—
Victor Rivero is the Editor-in-Chief of EdTech Digest. Write to: victor@edtechdigest.com