8 Ways to Ensure Trust When Deploying Artificial Intelligence (AI) on Campus

by Divyatej Raghu | August 20, 2020

Artificial Intelligence (AI)-based solutions are becoming more widespread on campus, offering the potential to address many longstanding challenges in Higher Ed: student success, access to financial aid, academic advising, and departmental productivity, to name a few.

The power of these new solutions is rooted in the access and availability of data. Solutions that serve students effectively from admission to graduation must have access to personalized student data and university information. AI solutions can then be developed that align with each student's interests, monitor progress, and match them to the right university resources.
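
As a toy illustration of that matching idea, consider the sketch below. The profile structure, resource catalog, and keyword-overlap scoring are all assumptions made for this example; a real deployment would draw on the student information system and much richer models.

```python
# Toy sketch (assumed names and data): match a student's stated interests
# to campus resources by simple keyword overlap.
from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    name: str
    interests: set[str] = field(default_factory=set)

# Illustrative catalog: resource name -> topic keywords it covers.
CAMPUS_RESOURCES = {
    "Financial Aid Office": {"financial aid", "scholarships", "loans"},
    "Academic Advising": {"advising", "course planning", "majors"},
    "Career Center": {"internships", "resumes", "careers"},
}

def match_resources(student: StudentProfile) -> list[str]:
    """Rank resources by how many of the student's interests they cover."""
    scored = [
        (len(student.interests & topics), name)
        for name, topics in CAMPUS_RESOURCES.items()
    ]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

student = StudentProfile("Avery", {"loans", "internships"})
print(match_resources(student))  # ['Financial Aid Office', 'Career Center']
```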

From our experience deploying AI, we understand how personalization impacts trust and the ultimate success of AI on campus. Universities and students alike need assurance that their data is secure and not being used for unintended purposes. This is part of a broader trend across industries, and we're now seeing federal and state legislatures grapple with AI-driven issues around data usage.

Well-developed AI solutions for Higher Ed integrate with existing learning management systems, student information systems, and campus platforms like Blackboard, PeopleSoft, and Banner. It's no surprise that FERPA compliance, privacy, and security are all issues we face on a regular basis. An information breach in any of these systems would have negative ramifications for students, administrators, and schools, as well as for our own brand.

Important points to consider:
Here are some major lessons we have learned from our initial roll-outs of AI solutions in Higher Ed. They apply to any campus deploying an AI solution and offer insights into data privacy, security, and overall trust.

1. With great power comes great responsibility – just because something is legal doesn't mean it's ethical, or acceptable to the user. For example, administrators requested a "find my student" capability to remind students about class attendance. We advised that unless students explicitly opt in, in bold capital letters, it would result in an immense loss of trust in both the school and the product.

2. Ownership – who owns the information and interactions with the AI? Is it the institution, or does the student have a claim? Establishing ownership upfront defines these parameters. Effective deployment of AI also requires services from providers like Google, Azure, and other cloud platforms, so ensuring you have data-use agreements in place can prevent situations that result in a loss of user trust.

3. Collaborative input reduces fear of the unknown – we sought feedback from stakeholders about their challenges. These one-to-one conversations relieved advisors' concerns that the "robot" would replace them. Score two for building trust and a cadre of potential evangelists for AI.

4. The option of choice is empowering – students were given all the choices: opt in to the system, set up their individual profiles with as much or as little information as they wished, select which types of notifications they received and how often, and decide who else their information could be shared with, e.g. parents. (A minimal sketch of such an opt-in preference model appears after this list.)

5. Prevention is better than cure – we enforce multiple layers of authentication and limit access to information from insecure locations and devices. For example, most smart speakers could not support authentication or sat in public spaces, so a smart-speaker user cannot access personal student information. Clients understand that no system is failsafe, but knowing we did everything possible to secure their interests was enough to also secure their trust. (A sketch of this kind of channel gating follows the list.)

6. Integrate AI with current "trusted apps" – new AI solutions can be deployed on campus by integrating with the school's mobile app, creating a seamless user experience and extending the institution's brand. We encourage this approach because familiarity accelerates user adoption. What is familiar is relatable, and more easily welcomed into one's life – a great lesson in building trust. One caveat: make sure the new AI solution aligns with the target app in terms of usability and functionality.

7. Peer influencers build trust – video demos on the school's channels fueled adoption, especially since today's college students prefer this medium and share extensively with their networks.

8. Manage expectations – let people know what to expect up front: AI has a learning curve, and it responds with rapidly increasing accuracy as it learns from interactions. No AI system can ever know the universe of questions, but a campus AI app will have a firm handle on at least 80% to 90% of the information required. People trust you when you tell them the truth upfront, and always. (The confidence-threshold fallback sketched after this list is one mechanical way to be honest about what the system does not yet know.)
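
To make point 4 (and the opt-in lesson of point 1) concrete, here is a minimal sketch of an opt-in preference model. It is illustrative only, not our product's actual schema; the class and field names are assumptions. The key design choice is that every sharing and notification option defaults to off, so silence never grants consent.

```python
# Hypothetical opt-in preference model (points 1 and 4); names are assumed.
# Every option defaults to OFF: silence never grants consent.
from dataclasses import dataclass, field

@dataclass
class NotificationPrefs:
    deadlines: bool = False    # financial aid / registration deadlines
    attendance: bool = False   # "find my student"-style reminders
    frequency: str = "weekly"  # "daily" | "weekly" | "off"

@dataclass
class ConsentProfile:
    opted_in: bool = False  # master opt-in switch for the whole system
    notifications: NotificationPrefs = field(default_factory=NotificationPrefs)
    share_with: set[str] = field(default_factory=set)  # e.g. {"parent"}

def may_send_attendance_reminder(profile: ConsentProfile) -> bool:
    """Act only when the student has explicitly opted in, twice over."""
    return profile.opted_in and profile.notifications.attendance

# A student who joined the system but never enabled attendance reminders:
p = ConsentProfile(opted_in=True)
assert not may_send_attendance_reminder(p)  # the safe default is silence
```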
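
For point 5, gating what an insecure endpoint may see can be expressed as a simple channel check. The channel names and the trusted set below are assumptions for illustration, not a description of any specific deployment.

```python
# Hypothetical channel gating (point 5); channel names are assumed.
# A smart speaker in a shared space never receives personal records.
from enum import Enum

class Channel(Enum):
    AUTHENTICATED_APP = "app"  # school mobile app with login and MFA
    WEB_PORTAL = "web"         # authenticated browser session
    SMART_SPEAKER = "speaker"  # often unauthenticated and in public spaces

# Channels trusted enough to carry FERPA-protected personal data.
PERSONAL_DATA_CHANNELS = {Channel.AUTHENTICATED_APP, Channel.WEB_PORTAL}

def answer(question: str, channel: Channel, is_personal: bool) -> str:
    """Serve public info anywhere; personal info only on trusted channels."""
    if is_personal and channel not in PERSONAL_DATA_CHANNELS:
        return "Please open the campus app to view personal information."
    return f"(answer to: {question})"

print(answer("When is the library open?", Channel.SMART_SPEAKER, is_personal=False))
print(answer("What is my GPA?", Channel.SMART_SPEAKER, is_personal=True))
```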
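
And for point 8, honesty about the learning curve can be enforced mechanically with a confidence threshold: below it, the assistant admits uncertainty and routes the question to a person. The 0.6 value and function names here are illustrative assumptions.

```python
# Hypothetical confidence-threshold fallback (point 8); values are assumed.
CONFIDENCE_THRESHOLD = 0.6  # illustrative; tuned per deployment in practice

def respond(intent: str, confidence: float) -> str:
    """Answer confidently matched questions; be honest about the rest."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"(answer for intent '{intent}')"
    # Honest fallback: admit uncertainty and route to a human advisor.
    return ("I'm not sure I understood that yet. "
            "I've flagged your question for an advisor to follow up.")

print(respond("financial_aid_deadline", confidence=0.92))
print(respond("unknown_topic", confidence=0.31))
```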

It’s interesting to note how personalization is perceived differently across generations. Generation Z has welcomed Artificial Intelligence: research from California universities showed that this group prefers self-service, AI, and a high level of personalization.

Conclusions:
Lessons from Facebook’s fake accounts and bots, which profiled users and fed them false information on social media, should make us wary enough. We’re now seeing more legislation, penalties, and oversight bodies put in place to regulate not just how data is secured for personalization, but how all the additional data we gather is used. A company with less integrity might decide to sell data on where students in a particular locale dine, party, or shop. (We have our own “Use of Data” clauses as an assurance to our clients, and all potential buyers should ask to see these policies.)

Personalization technology is likely to become more pervasive, or more intrusive, depending on your point of view. Each Higher Ed institution will need to decide whether to embrace it with enthusiasm or restrict its use with caution. Following the guidelines above will help ensure your AI deployment strikes the right balance between information access and privacy/security, and earns trust.

Author

  • Divyatej Raghu, Global Talent Head; Business Head, Higher Ed & Public Sector

    As Business Head, Higher Ed and Public Sector, DivyaTej (DT) leads client projects in higher education, government, and the non-profit sector. Passionate about innovation and digital transformation, DT’s technical expertise spans enterprise applications, large data systems, machine learning, AI, and the Cloud. With a particular flair for launching new services and products, DT is skilled at building businesses from the ground up. His hands-on leadership style and ability to seamlessly connect have brought numerous ThoughtFocus initiatives to successful fruition. Among his many achievements, DT built and launched the revolutionary AI-based chatbot, YANA.
