The MS-CC Environmental Science Hackathon at West Virginia State University (WVSU) invites students to tackle real-world sustainability and rural resilience challenges using data science, AI, and open-source tools. Run in partnership with ESIIL, the event asks students to think like cybersecurity professionals responsible for protecting the real-world data systems that communities rely on.
Participants will work with data from a distributed network of low-cost air quality sensors. These types of networks are increasingly used in rural and under-resourced regions, including parts of West Virginia, where traditional monitoring infrastructure is limited. Public agencies, farmers, schools, and residents may rely on this data to make health, environmental, and operational decisions.
However, these systems are vulnerable to subtle cyber threats. Sensors can report faulty data, APIs can be misused, and decision systems can be misled by incomplete or manipulated information.
In this hackathon, students will investigate how those failures happen and design ways to detect and prevent them.
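As one illustration of the kind of check teams might build (this is a sketch, not official starter code), the snippet below flags suspicious spikes in a PM2.5 series using a robust median/MAD z-score; the readings and threshold are invented for the example:

```python
import statistics

def flag_spikes(readings, z_thresh=3.5):
    """Return indices of readings far from the series median.

    Uses a robust z-score based on the median absolute deviation (MAD),
    so a single manipulated reading cannot hide itself by inflating the
    spread the way it would with an ordinary standard deviation.
    """
    med = statistics.median(readings)
    mad = statistics.median(abs(x - med) for x in readings)
    if mad == 0:
        return []  # no spread at all: nothing stands out
    return [i for i, x in enumerate(readings)
            if 0.6745 * abs(x - med) / mad > z_thresh]

# Synthetic PM2.5 series (µg/m³) with one injected spike at index 5.
pm25 = [12.1, 11.8, 12.4, 12.0, 11.9, 95.0, 12.2, 12.3]
print(flag_spikes(pm25))  # → [5]
```

A median-based statistic is chosen here precisely because outliers this large distort a mean-and-standard-deviation check enough to escape it.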
Across two days, students will:
- Learn how to access and analyze real-world air quality data using APIs and time-series workflows;
- Work in teams to investigate how cyber and data integrity threats affect distributed sensor networks;
- Design and test a sensor trust model that detects unreliable or compromised data sources;
- Propose practical cybersecurity defenses to improve the resilience of environmental monitoring systems;
- Present their solutions, with awards recognizing technical rigor, defensive strategy, and clarity of reasoning.
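For instance, the sensor trust model mentioned above could start as simply as scoring each sensor by how often it agrees with the network-wide median at each timestamp; the sensor IDs, readings, and tolerance below are all invented for illustration:

```python
import statistics

def trust_scores(network, tol=10.0):
    """Score each sensor in [0, 1] by agreement with the network median.

    `network` maps sensor id -> list of readings aligned by timestamp.
    A sensor that repeatedly strays more than `tol` µg/m³ from the
    per-timestamp median earns a lower trust score.
    """
    ids = list(network)
    n_steps = len(next(iter(network.values())))
    agree = {s: 0 for s in ids}
    for t in range(n_steps):
        med = statistics.median(network[s][t] for s in ids)
        for s in ids:
            if abs(network[s][t] - med) <= tol:
                agree[s] += 1
    return {s: agree[s] / n_steps for s in ids}

# Three honest sensors plus one that drifts high (simulated compromise).
readings = {
    "A": [12, 13, 12, 14],
    "B": [11, 12, 13, 13],
    "C": [13, 12, 12, 14],
    "D": [40, 45, 50, 48],  # manipulated sensor
}
print(trust_scores(readings))  # → {'A': 1.0, 'B': 1.0, 'C': 1.0, 'D': 0.0}
```

Stress testing such a score might mean injecting slower drifts or coordinated multi-sensor attacks and observing where the median-agreement assumption breaks down.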
This hackathon builds capacity for the future cybersecurity and data science workforce by placing students in realistic scenarios where systems must function under uncertainty. Participants will gain hands-on experience securing distributed data pipelines similar to those used in public health, environmental monitoring, and IoT systems.
Top projects may be highlighted at MS-CC events and connected to future opportunities, including internships, research collaborations, and continued mentorship.
Requirements
Submit the following:
1. PowerPoint Slides
2. Code (Jupyter Notebook, Python)
3. Documentation of how AI was used
4. Pictures or Videos
5. Additional Materials (Optional)
Prizes
- Winner of the MS-CC WVSU Hackathon
Judges
- Ali Alsinayyid
- James Sanovia
- Ramadan EL Sharif
- Jeff Choi
- Lilly Jones
- Pallab Chatterjee
Judging Criteria
- Code Cleanliness: 1 = Poor (code difficult to read or run; minimal comments; unclear structure) · 3 = Competent (code runs but has limited organization or documentation) · 5 = Excellent (clean, well-organized code with clear variable names, comments, and a reproducible workflow)
- Trust Score: 1 = Poor (trust score unclear or unsupported) · 3 = Competent (trust score calculated but with limited justification) · 5 = Excellent (clear, well-designed trust score with logical metrics and explanation)
- Stress Testing: 1 = Poor (little or no testing of trust score robustness) · 3 = Competent (some stress testing attempted but with limited analysis) · 5 = Excellent (thorough stress testing with clear interpretation of how the trust score behaves)
- Defense of PurpleAir from Cyber Attacks: 1 = Poor (few or unrealistic mitigation strategies) · 3 = Competent (some defensive ideas presented but not well justified) · 5 = Excellent (thoughtful, practical strategies for detecting or preventing attacks)
- Use of LLM: 1 = Poor (LLM used minimally or without explanation) · 3 = Competent (LLM used but its contribution unclear) · 5 = Excellent (LLM used thoughtfully for code generation, analysis, or documentation, and clearly described)
- Interpretation of Results: 1 = Poor (limited interpretation or incorrect conclusions) · 3 = Competent (basic interpretation of outputs) · 5 = Excellent (insightful interpretation connecting results to air quality data and cybersecurity risks)
- Attack Detection: 1 = Poor (incorrectly identifies the simulated attack) · 3 = Competent (correctly identifies the attack but with flawed reasoning) · 5 = Excellent (correctly identifies the attack with sound reasoning)
- Final Presentation: 1 = Poor (disorganized, over time, limited team participation) · 3 = Competent (mostly clear but uneven pacing or participation) · 5 = Excellent (clear, engaging presentation under 10 minutes with shared speaking and strong visuals)
Questions? Email the hackathon manager