Submit a Challenge

Timeline

Submitting the interest form tentatively secures your spot. Chris will follow up on a rolling basis to give an initial go/no-go on your project concept. If your project gets the green light, the timeline below kicks in. Missing a deadline puts your spot at risk.

  • May 22 — Interest form deadline. Submit the interest form to secure your spot.
  • Early June — One-on-one check-in with Chris. Initial Kaggle draft due — doesn’t need to be complete. See Launching Your Challenge on Kaggle for guidance.
  • Mid-July — Full organizer group meeting. Majority of data cleaned and uploaded; metric defined and tested, even if still being refined.
  • August 1 — Full draft complete: problem description, all data uploaded, metric finalized, sample submission ready.
  • August 17 — Projects announced and participant registration opens.
  • September–December — Marathon runs. Weekly sprint events on Wednesdays, 4:30–6:30 pm.

August 1 is a firm deadline. Challenges submitted after this date cannot be guaranteed inclusion in MLM26. If you’re unsure whether your dataset will be ready in time, reach out early — we’d rather help you scope things down than miss the window.

What Makes a Strong Challenge?

All submitted challenges must meet three core requirements:

  1. A well-defined ML problem. Teams need a clear target — a variable to predict, a class to detect, an anomaly to find. Vague goals (“explore this dataset”) are hard to evaluate and frustrating to work on. Define what a good solution looks like before submitting.
  2. A measurable, quantitative outcome. Kaggle competitions require a concrete metric (accuracy, RMSE, F1, AUC, etc.) and a held-out test set for automated scoring. The metric should reflect real-world impact, not just leaderboard performance.
  3. Shareable data. Data must be uploadable to Kaggle and accessible to all registered participants. If your real dataset is sensitive or proprietary, a de-identified or synthetic analog is acceptable — as long as it preserves the ML problem structure and is ready by August 1. Data with unresolved IRB restrictions or proprietary constraints is not eligible.
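To make requirement 2 concrete, here is a minimal sketch of how automated scoring works: predictions are compared against a held-out test set with a single quantitative metric (RMSE in this example). The row ids, column shape, and values are hypothetical and only for illustration.

```python
# Hypothetical illustration of automated scoring on Kaggle:
# a held-out ground truth is compared to a participant's
# submission using one concrete metric (RMSE here).
# Row ids and values are made up for this sketch.
import math

# Held-out ground truth (kept private by the platform), keyed by row id.
solution = {"r1": 3.0, "r2": 5.0, "r3": 2.0}

# A participant's submission, parsed into the same id -> prediction shape.
submission = {"r1": 2.5, "r2": 5.5, "r3": 2.0}

def rmse(truth: dict, preds: dict) -> float:
    """Root mean squared error over the shared row ids."""
    errors = [(truth[k] - preds[k]) ** 2 for k in truth]
    return math.sqrt(sum(errors) / len(errors))

score = rmse(solution, submission)
print(f"RMSE: {score:.4f}")
```

Any of the metrics named above (accuracy, F1, AUC) fits the same pattern: one number, computed automatically from the submission and the hidden test labels.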

We also encourage organizers to include a brief note on recommended prerequisites (e.g., “familiarity with image classification helpful”) so participants can self-select appropriately. And while it’s not required, organizers who drop in during sprint events consistently see better outcomes from their teams.

How We Select the Final Challenge Set

Meeting the requirements above is necessary but not sufficient for inclusion. MLM aims to represent the breadth of AI/ML-adjacent work happening at UW–Madison, so we curate the final set of challenges to balance two kinds of diversity:

  • Application areas — such as biomedical and health, ecology and environment, social science, engineering, language and text, and industry.
  • AI/ML methods — such as computer vision, NLP, time-series forecasting, tabular modeling, anomaly detection, and generative approaches.

A well-prepared challenge can still be waitlisted if the portfolio already has strong coverage in that area. We’ll always communicate the reason, and strong submissions are considered for future years. Submitting early gives us more flexibility.

A note on prizes: MLM is a free, educational competition — there are no cash prizes. The reward is the experience: working with a capable team, getting feedback from domain experts, and building something real. We’ve found this attracts participants who are genuinely curious, which tends to make for more thoughtful and creative submissions.

Launching Your Challenge on Kaggle

We use Kaggle’s free prediction competition format for all MLM challenges. This means your challenge is visible to Kaggle’s global community of millions of practitioners — not just UW–Madison participants — giving your problem and dataset broad exposure. You can get started on your draft right after submitting the interest form.

  1. Create a new competition draft. Go to Kaggle → Create → Competition and select Machine Learning Competition (not a dataset or notebook). See the Kaggle competition setup docs for full instructions.
  2. Add Chris as a collaborator for review. In Settings → Collaborators, add Kaggle username qualiamachine. Add him whenever you’re ready for initial feedback, or by July 15 at the latest. You can remove him after the review is complete.
  3. Submit your draft by August 1. Your draft should be substantially complete: problem description written, data uploaded, metric defined, and a sample submission file ready. It doesn’t need to be perfect — that’s what the review is for.
  4. Join the ML+X Kaggle organization (we’ll coordinate this together). Once your challenge is selected, we’ll add ML+X as the host organization. You can use the mlx-community invite link to join, or email Chris at endemann@wisc.edu and he’ll get you added. You’ll then select ML+X under “Creating as” in your competition’s General Settings — this affiliates the challenge with our community but does not change your editing permissions.
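The sample submission file mentioned in step 3 is typically just a CSV with a row-id column and placeholder predictions for every row in the public test set. A minimal sketch, with hypothetical column names and a constant baseline value:

```python
# Minimal sketch of generating a sample submission file for a
# Kaggle prediction competition. The column names ("id", "target")
# and the constant baseline prediction are hypothetical.
import csv
import io

test_ids = ["r1", "r2", "r3"]   # ids from the public test set
baseline = 0.0                  # constant placeholder prediction

buf = io.StringIO()
writer = csv.writer(buf, lineterminator="\n")
writer.writerow(["id", "target"])      # header row the scorer expects
for row_id in test_ids:
    writer.writerow([row_id, baseline])

print(buf.getvalue(), end="")
```

In practice you would write the rows to a real file (e.g., sample_submission.csv) and upload it alongside your data so participants can see the exact format the scorer expects.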

Frequently Asked Questions


Do I need to be a machine learning expert to submit a challenge?

No. Many of our best challenge organizers are domain experts — ecologists, clinicians, engineers — who bring the data and the problem, and lean on Chris and the ML+X community for technical setup. You don’t need to know Kaggle before reaching out. Email endemann@wisc.edu with any questions.

My dataset isn't ready yet. Should I still fill out the interest form?

Yes — fill out the form and describe where things stand. If your data still needs significant cleaning, labeling, or IRB work, that needs to be resolved before August 1. Chris can help you estimate the prep effort and decide whether the timeline is feasible.

My data is sensitive or proprietary. Can I still participate?

Possibly. All data hosted on Kaggle must be shareable with participants. If your real dataset can’t be shared, a de-identified or synthetic analog may work — as long as it preserves the structure of the ML problem. Reach out early if you’re in this situation; clearing a data-sharing path takes time.

Will my challenge automatically be included?

Not necessarily. Beyond meeting the core requirements, we curate the final challenge set to reflect the diversity of AI/ML work at UW–Madison, balancing application areas and AI/ML methods. A well-prepared challenge can still be waitlisted if we already have strong coverage in that area. We’ll always explain why, and strong submissions are considered for future years.

Are multi-stage or two-phase competitions possible?

Unfortunately, no. Multi-stage competitions require Kaggle’s paid plan, which we don’t use. All MLM challenges use Kaggle’s free prediction competition format: a single leaderboard scored against a held-out test set.

Do I have to advise or mentor teams during the Marathon?

It’s strongly encouraged but not required. Organizers who show up at sprint events (Wednesdays, 4:30–6:30 pm, September–December) consistently see better results from their teams. Even occasional appearances make a real difference. If you’re interested in a more formal advisor role, you can note that on the interest form.

What should my Kaggle competition page include?

See last year’s challenges for examples. Your page should include: a plain-language overview of the problem, a description of the data (fields, size, format), the evaluation metric and why it was chosen, and any recommended prerequisites or starter resources. A starter notebook is a nice bonus but not required.

Can a company or industry lab submit a challenge?

Yes, but industry partners are required to sponsor ML+X to participate as challenge organizers. Sponsorship starts at $2,500 and includes the ML Marathon challenge as one of several benefits. See the ML+X sponsorship page for details or email endemann@wisc.edu to get started.

What's the time commitment for a challenge organizer?

The bulk of the work happens before the Marathon: scoping the problem, preparing the data, and drafting the Kaggle page (due August 1). There’s no required commitment during the Marathon itself (September–December) — though we hope you’ll drop in when you can as a domain expert or ML/AI advisor.

Ready to Get Involved?

Fill out the interest form to submit a challenge, sign up as an advisor, or volunteer. We’ll be in touch in June to schedule a brief check-in. Questions? Email endemann@wisc.edu.

Submit Interest Form

Past Challenges & Artifacts