Including Campus in Technology Conversations

K. Park / July 17, 2021 / 5 Comments

ISTE Standard for Coaches 3

Coaches establish productive relationships with educators in order to improve instructional practice and learning outcomes.

  • 3c – Partner with educators to evaluate the efficacy of digital learning content and tools to inform procurement decisions and adoption.

Question

How can coaches/EdTech admins effectively and fairly evaluate current and new digital learning tools that are purchased by their institutions?

Introduction

When it comes time to choose new technologies that impact teaching and learning, how does one decide which tool is best, and whose voices are included? I think about this often in my role as an EdTech administrator, and as I prepare to evaluate some of the existing teaching and learning technologies on campus, I have begun to wonder and to collect resources.  My concerns can be summarized by the following questions: “Who should help me evaluate?” “What criteria should I use?” “How should I measure responses/evaluations?” “When should we do this?”  Each question has its own set of answers, and I know the process requires more than what I can write in this blog post, but I’m going to try to summarize my thoughts and hit the key points that I think are important.

The points below are for evaluating existing technologies, or for when you’ve already identified and whittled the list down to a specific vendor (or vendors). Getting to that shortlist would be its own blog post about creating a “needs and wants” list and working within budget constraints; in my case, the budget constraint often wins.

Who should help me evaluate?

In addition to bringing the IT and EdTech teams together (and even legal, to review contracts), a committee, community of practice, or advisory group should usually be formed to evaluate tools. But is that everyone? “If novice LMS users are not members of the committee, how will their perspectives be obtained? How will the difficulties novices might encounter be identified? If students are not members of the selection committee, how can the committee effectively identify the LMS features that will help students complete their assignments? Some of this information can be gathered by surveys, but someone on the selection committee must advocate for students, as they are the ones most affected by an LMS implementation” (Wright, Lopes, Montgomerie, Reju, & Schmoller, 2014). If you replaced the word LMS with any other vendor name or EdTech tool, this quote would still apply.

But it’s not always possible to have everyone at the table, and you may not be able to get the same group for each EdTech tool in use, so I think it’s also important to consider how to obtain feedback consistently. Groups need to be creative: create a communication plan with other impacted areas and the broader campus to share what’s happening, combat low survey response rates by partnering with other departments to collect feedback, offer rewards, or find other ways to reach the broader campus community.  There also needs to be a way to recognize committee members, and hopefully compensate those who take additional time to serve their institutions in this way, because evaluating tools takes time and can, in some cases, take an entire term to gather enough data.

What criteria should I use?

Western University released a Rubric for E-Learning Tool Evaluation that I really like because it organizes the main concerns about technology evaluation into a neat, measurable tool (Anstey & Watson, 2018).  It groups items into eight main categories: Functionality, Accessibility, Technical, Mobile Design, Privacy & Rights, Social Presence, Teaching Presence, and Cognitive Presence.  Within those categories, I’ll highlight some of the criteria I especially like: tech support/help availability, functionality on a mobile device, and the exploration of data privacy for students.

When I think about where I work and how I might use a rubric like this, there are some things I would adjust to better suit what my supervisor and I identify as important for technology selection. One area in particular I want to focus on is institutional direction and values: how will these tools support those goals, and in what ways does the campus need to adjust to make this happen?  By adjust, I mean how does this tool fit into existing systems, what work or conversations might be needed with other areas to implement it, and how do you evaluate that work?  For example, is there a software component that needs to be deployed to managed computers?  If a tool is already in place, then an additional criterion should be added about whether the tool is still meeting the desired needs; if those needs have changed, document them and make sure the impacted parties understand why things are changing.

I would also like to incorporate the work Northern Illinois University did for their LMS Review. The detailed process of an LMS review may not be needed for every technology tool, but I believe some pieces should be folded into a modified rubric too, such as vendor uptime, disaster recovery, reporting, and how other IT and EdTech administrative tasks and needs are addressed.
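Pulling those two sources together, here is a minimal sketch of how a modified rubric could be represented and scored. The eight category names come from the Western University rubric mentioned above; the “Institutional Fit” and “Administration” categories, the individual criteria, the rating labels, and all of the weights are hypothetical placeholders I’m using for illustration, not values from either source.

```python
# A minimal sketch of a modified evaluation rubric, represented as data.
# The first eight category names are the ones in the Western University rubric;
# "Institutional Fit" and "Administration" are my own additions, and every
# weight, criterion, and rating label here is a hypothetical placeholder.

RATING = {"works well": 3, "minor concerns": 2, "serious concerns": 1}

RUBRIC = {
    # category: (weight, criteria)
    "Functionality": (0.15, ["Tech support/help availability"]),
    "Accessibility": (0.10, ["Meets accessibility standards"]),
    "Technical": (0.10, ["Fits into existing systems"]),
    "Mobile Design": (0.10, ["Functionality on a mobile device"]),
    "Privacy & Rights": (0.15, ["Data privacy for students"]),
    "Social Presence": (0.05, ["Supports collaboration"]),
    "Teaching Presence": (0.05, ["Supports facilitation and feedback"]),
    "Cognitive Presence": (0.05, ["Supports higher-order thinking"]),
    "Institutional Fit": (0.15, ["Aligns with campus direction and values"]),
    "Administration": (0.10, ["Vendor uptime", "Disaster recovery", "Reporting"]),
}


def score_tool(ratings: dict) -> float:
    """Average each category's criterion ratings, then apply the category weights."""
    total = 0.0
    for category, (weight, criteria) in RUBRIC.items():
        values = [RATING[ratings[category][criterion]] for criterion in criteria]
        total += weight * (sum(values) / len(values))
    return total  # ranges from 1.0 (all serious concerns) to 3.0 (all works well)
```

Each committee member could fill out a copy of those ratings, and the results could then be averaged or compared side by side, which also makes it easier to show the broader campus how a decision was reached.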

How should I measure responses/evaluations?

One of the challenges I have had with tool evaluation is how to factor in instructor and student feedback.  Often the feedback comes in the form of a comment, and it can be difficult to ascribe a value to it for ranking purposes.  After ascribing a value, I then need to find a way to weight it against the other criteria used to evaluate a tool.  If instructors and students are the ones primarily impacted, shouldn’t their feedback mean more? Last year I read an article in which an IT administrator said that the primary goal is to support learning in a safe and secure environment.  If that is true, then choosing any particular tool sometimes means taking on an element of risk.  Who gets to decide how much risk is appropriate, and how should that be factored in?

I’m not sure I have a good answer for these concerns, but I do know that they should be addressed when the groups are created and before the evaluation process begins.
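To make the weighting question a little more concrete, here is a rough sketch of what rolling stakeholder feedback into one number could look like. The groups, ratings, and weights are all made up for illustration; deciding the actual weights, and who gets to decide them, is still the hard, human part of the process.

```python
# A rough sketch of combining stakeholder feedback into one weighted score.
# The groups, ratings, and weights below are invented for illustration only.

# Each group's feedback reduced to an average rating on a 1-5 scale, e.g. from
# Likert-style survey items or comments the committee has coded to numbers.
group_ratings = {"students": 4.2, "instructors": 3.6, "it_edtech": 3.0}

# If instructors and students are the ones primarily impacted, one answer is
# simply to weight their feedback more heavily. Weights must sum to 1.0.
group_weights = {"students": 0.45, "instructors": 0.35, "it_edtech": 0.20}

overall = sum(group_weights[g] * group_ratings[g] for g in group_ratings)
print(f"Weighted stakeholder score: {overall:.2f} / 5")
# 4.2 * 0.45 + 3.6 * 0.35 + 3.0 * 0.20 = 3.75
```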

When should we do this?

The timing of evaluations can be difficult, especially when you work on a quarter system; that compressed schedule doesn’t leave much room for additional tasks.  If you need partner departments to help set up or deploy components of the tool, their workflows and timeframes also need to be factored in. For example, is there a specific period of the year when new academic software is deployed, or is it a rolling process?

Generally, I think fall is the best time to convene, with time to demo and use the tools in early winter.  That leaves enough time for legal to review contracts and for purchase requests to go through so a tool transition aligns with the institution’s fiscal year.  Typically, you need to notify a vendor a certain amount of time before a contract ends if you’re cancelling, and then you also need time to introduce the new tool.  Spring can be a good time to introduce a new tool, but it can also be a time of year when people are ready to be done, and that’s okay, because you have summer to create trainings and documentation, install software, and build any supplementary resources needed to make the transition as successful as possible for the start of fall.

Conclusion

Choosing institution-wide tools is complex, and the process should be planned and well documented before the need to evaluate or choose a new tool arises. There are also many issues to think through. While an LMS review and change is the most extreme process for EdTech tools (much like changing your student information system), other tools can be just as impactful on the learning experience and should have an evaluation process in place too.

Overall, I think existing tools and practices already address these questions and provide a way forward for evaluating technologies.  My concern, then, is whether the tools being selected have been vetted appropriately for their impact on learning in the classroom. It’s not ideal to choose a tool and then realize later that it isn’t going to serve a portion of students well, such as finding out that some tools using AI facial recognition don’t work well on certain skin tones.

What I hope will come from a thorough process and refined existing tools is that each tool gets evaluated and that more groups on campus understand the impact of their choices, not just on those who maintain the technologies, but on those who teach within the systems and those who learn with them.  Hopefully, these evaluations remind those with decision-making authority to focus not just on finding a tool that meets the “needs and wants” list, but also on the people who have used or will use these technologies, and to better incorporate their experiences into the evaluation and decision-making process.

Resources

Featured Photo by Jon Tyson on Unsplash

Northern Illinois University. LMS Review. https://www.niu.edu/lms-review/index.shtml

Robertson, R.J. and Park, K.J. 2021. A conversation on technology selection in the classroom. ETM. https://scholars.spu.edu/etm/2021/01/27/a-conversation-on-technology-selection-in-the-classroom/

Anstey, L. and Watson, G. 2018. A rubric for evaluating e-learning tools in higher education. https://er.educause.edu/articles/2018/9/a-rubric-for-evaluating-e-learning-tools-in-higher-education

Wright, C.R., Lopes, V., Montgomerie, C., Reju, S., and Schmoller, S. 2014. Selecting a learning management system: Advice from an academic perspective. https://er.educause.edu/articles/2014/4/selecting-a-learning-management-system-advice-from-an-academic-perspective


5 Comments

  1. You’re certainly asking all the right questions, and I greatly appreciate your commitment to making sure all impacted folk have a seat at the table! The ‘rubric’ criteria you reference from Western Washington feels particularly relevant and helpful and I hope you’re able to lean on this work in your upcoming evaluations.

    1. *Sorry, I meant to reference Western University not Western Washington

  2. Good job! I like your emphasis on evaluations and the need for “those with decision making authority” to carefully consider the tools before spending lots of money, whereby teachers “have” to make something work or the money is wasted on useless software. Anyway, thanks for your insight. I enjoyed this!

  3. Enjoyed your post; the Rubric for E-Learning you introduce is super useful and can carefully assist teachers in reviewing selected digital tools. But I wonder, when you said “list and working within budget constraints, which in my case, often the budget constraint wins,” how do you balance what you need with the budget?

    1. It’s always tough with budget constraints. It’s one of the challenges that I face each year when looking at new tools, but also at what we already have. Some of the strategies I use are to prioritize the tools with the highest impact on student learning, like an LMS or a campus-wide video management/creation system. Then I would go with other tools that support the strategic direction or values of the institution. Sometimes this means I can’t get tools that I think would be really beneficial; I just don’t have the funding, so I might encourage instructors to try similar strategies in tools that are already available or free to use. Sometimes being patient with a company and their roadmap can pay off too; for example, Canvas is going to do a Discussion overhaul and there will be a slew of new features coming. It’s a tough balance.
