Grants:IdeaLab/Future IdeaLab Campaigns/Results


Summary


From 4 December 2015 to 5 January 2016, participants were asked to suggest topics and voice preferences for upcoming IdeaLab campaigns. Using AllOurIdeas to collect these suggestions and preferences, approximately 90-100 participants[1] from over 25 countries[2] cast roughly 1,700 valid votes comparing 45 topics (33 submitted by participants, 12 seeded in the initial survey). The outcomes are as follows:

  1. A number of compelling topics for future IdeaLab campaigns were identified. Campaign topics that participants preferred included:
    • volunteer engagement and motivation,
    • accessibility and use of multimedia content in projects,
    • content curation,
    • engaging partnerships / experts,
    • improvements in contributing to or use of Wikidata content,
    • addressing abuse / harassment, and
    • API improvements.
  2. Content curation will be the focus of the next IdeaLab campaign, starting at the end of February. This decision was made because five campaign topics related to improving content review and content maintenance processes received moderate to strong preference. The Community Resources team is eager to support volunteer efforts aimed at ensuring and raising the quality of content across Wikimedia projects.
  3. IdeaLab campaigns on these topics and others will be held twice a year. Campaigns will last approximately one month each, and will generally be scheduled to precede open calls for the upcoming Project Grants program designed as part of the recent Reimagining WMF grants consultation.[3]

Background


The Future IdeaLab Campaigns survey was launched in early December 2015 and ran for one month. It was promoted across many projects, including Wikipedia, Commons, Wikisource, Wikidata, and Wiktionary, in several languages, including French, Italian, Russian, Spanish, and English. Contributors on mailing lists for Wikimedia India and CEE were also contacted.

The survey asked community members "Which IdeaLab campaign topic do you prefer?"

Please refer to the AllOurIdeas page for complete results.

AllOurIdeas provides a score for each option based on preferences from survey participants. The score answers the following question: if this campaign idea were randomly paired with another idea from this survey, what percentage of the time would it be preferred? In addition to the score, clicking on an idea shows how many completed contests it has gone through, i.e. how many times it was compared against another idea. Ideas with fewer completed contests tend to have more extreme scores.
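For illustration, here is a minimal sketch of how such a pairwise-preference score could be approximated from raw contest outcomes. This is a simplified win-fraction model, not AllOurIdeas' actual estimator (which is more sophisticated); the function name and example data are hypothetical.

```python
from collections import defaultdict

def approximate_scores(contests):
    """Approximate a pairwise-preference score from raw contest outcomes.

    contests: list of (winner, loser) idea pairs, one per completed contest.
    Returns {idea: percentage of its completed contests that it won}.
    """
    wins = defaultdict(int)
    total = defaultdict(int)
    for winner, loser in contests:
        wins[winner] += 1
        total[winner] += 1
        total[loser] += 1
    return {idea: round(100 * wins[idea] / total[idea], 1) for idea in total}

# Hypothetical example: three completed contests.
example = [
    ("content curation", "API improvements"),
    ("content curation", "gender-gap solutions"),
    ("API improvements", "content curation"),
]
print(approximate_scores(example))
# {'content curation': 66.7, 'API improvements': 50.0, 'gender-gap solutions': 0.0}
```

Under any such estimator, an idea with only a handful of completed contests can easily sit at an extreme (one win out of one contest already yields 100), which is why low contest counts tend to produce more extreme scores.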

Results

Shortlist of topics / campaign ideas

Topic | Broader campaign idea | Score[4] | # of contests[5]
Accessibility to and use of multimedia content in projects | N/A | 45 | 168
Developing bots for routine maintenance tasks | Content curation | 48 | 172
Strategies for engaging with and motivating project volunteers | N/A | 59 | 162
Improvements in contributing to or use of Wikidata content | N/A | 53 | 161
Developing or improving content review or curation processes | Content curation | 59 | 157
Workflows or tools for editing and maintenance tasks | Content curation | 60 | 150
Strategies and tools to handle cases of long-term abuse | Abuse / Harassment | 45 | 142
Addressing harassment of Wikimedia project contributors | Abuse / Harassment | 36 | 136
Tools to help experts on a subject matter advise on that subject matter, so that editors with less expertise can make better decisions | Engaging partnerships / experts | 61 | 117
Engaging outside knowledge networks (libraries, educators, etc.), in novel participation strategies | Engaging partnerships / experts | 64 | 116
Establishing Wikimedia groups in universities (e.g. through student organizations) | Engaging partnerships / experts | 72 | 27
Building the next generation of tools using Wikimedia APIs (Application Programming Interface) | API improvements | 75 | 22
Improving maps and other location-based multimedia | API improvements | 71 | 32


Raw idea list with score and # of contests

Campaign idea | Score[4] | Completed contests[5]
Improve abuse filter | 85 | 11
Building the next generation of tools using Wikimedia APIs (Application Programming Interface) | 80 | 28
Establishing Wikimedia groups in universities (e.g. through student organizations) | 74 | 32
Improving maps and other location-based multimedia | 73 | 35
Improvements in enabling content creation | 67 | 181
Engaging outside knowledge networks (libraries, educators, etc.), in novel participation strategies | 65 | 120
Improving translation quality and translation request organization for Wikimedia project content | 65 | 29
Scalability of Wikisource book transcription process to millions of books | 63 | 62
Workflows or tools for editing and maintenance tasks (content curation) | 60 | 150
Developing or improving content review or curation processes (content curation) | 59 | 157
Tools to help experts on a subject matter advise on that subject matter, so that editors with less expertise can make better decisions | 59 | 120
Strategies for engaging with and motivating project volunteers | 59 | 166
Usability effectiveness and main missing features in Wikimedia Commons media search versus Flickr, Google Photo and others | 58 | 64
Improving addition and maintenance of references (content curation) | 55 | 159
Improvements in contributing to or use of Wikidata content | 53 | 161
Mass content adding on Wiktionary and effects on active editors community size | 52 | 73
Increasing participation from underrepresented groups | 50 | 169
Implementation of speech synthesis (e.g. similar to Loquendo) to read Wikipedia articles aloud | 50 | 0
Developing bots for routine maintenance tasks (content curation) | 48 | 172
Correcting systematic bias in article content (content curation) | 48 | 164
Wikidata and Wikipedia data quality improvement following embedding of Wikidata data on Wikipedia articles | 47 | 64
Accessibility to and use of multimedia content in projects | 46 | 169
Improving implementation / user notification systems behind templates for uploaded files | 45 | 31
Wiktionary editing as a tool for language learners | 45 | 51
Strategies and tools to handle cases of long-term abuse | 45 | 147
Participation games and microcontributions | 44 | 140
Wikibooks and Wikiversity compared to other Open Educational Resources platforms | 42 | 69
Creating tools to ameliorate the status of works about Medicine in different Wikipedias | 38 | 121
Increasing prestige of Wikipedia, Wikisource and Wiktionary in smaller languages such as Indic languages | 38 | 78
Wikidata editing suitability for subsequent involvement of new editors in more complex editing, e.g. on Wikipedia | 37 | 68
Gender-gap solutions | 36 | 51
Driving traffic to sister projects via interproject links, interproject and interlanguage search | 35 | 63
Developing tools that encourage contributions from anonymous editors | 34 | 36
Suitability of Wikisource, Wikiquote and other sister projects for women, minors and other minorities | 28 | 65
Methods, tools and manuals on how to add projects to the Wikimedia Foundation's classifier for revision analysis (ORES) | 26 | 72
Effects on participation and efficiency of using many private wikis instead of Meta-Wiki | 10 | 38

Additional notes and observations

  • Several submissions to the AllOurIdeas survey were better suited as specific proposals or ideas than as themes for an IdeaLab campaign. In these cases, we identified a larger theme under which the idea would sensibly fit. For instance, "Improve abuse filter" was considered too specific, but fits within a larger theme of how to better address and prevent disruptive editing behavior.
Conversely, some submissions were too broad in scope, and would benefit from narrowing so that a strategic problem or need within Wikimedia projects can be addressed. For instance, "Improvements in enabling content creation" (which admittedly, I, User:I JethroBT (WMF), initially seeded into the survey) could be interpreted to apply to most efforts that contributors to Wikimedia projects engage in.
Consequently, the shortlist above is sorted roughly by # of contests for ideas consistent with the themes we identified from all submissions. The raw list of submissions and scores can be viewed on AllOurIdeas, and is also reproduced in the raw idea list table above.
  • Some ideas were added later in the survey, and did not get voted on very often. Their scores tended to be more extreme, a consequence of the way AllOurIdeas calculates scores from completed contests.
  • AllOurIdeas provides anonymized information on participants and their behavior in the survey. Review of this data showed behavior consistent with attempts to game the system, for instance by selectively voting for or against specific topics while skipping most others. These votes were discounted from the analysis (a sketch of this kind of check appears after this list). Topics most frequently subjected to this behavior were:
    • Workflows or tools for editing and maintenance tasks,
    • Correcting systematic bias in article content,
    • Developing bots for routine maintenance tasks,
    • Accessibility to and use of multimedia content in projects,
    • Improvements in enabling content creation,
    • Developing or improving content review or curation processes,
    • Strategies and tools to handle cases of long-term abuse, and
    • Tools to help experts on a subject matter advise on that subject matter
In total, these invalid votes represented about 7% of the 1,880 votes cast (roughly 130 votes, leaving about 1,750 valid votes).
  • One technical limitation of AllOurIdeas was that idea descriptions were limited to 140 characters and could not contain external links. As a result, users were sometimes presented with choices they did not recognize and could not easily get more information about.
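As referenced above, the following is a hypothetical sketch of the kind of selective-voting check described in these notes. The event format, field names, and threshold are assumptions for illustration; they do not reflect the actual AllOurIdeas data export or the exact analysis performed.

```python
from collections import Counter, defaultdict

def flag_selective_sessions(events, min_votes=10, focus_threshold=0.8):
    """Flag survey sessions whose votes concentrate on a single topic.

    events: iterable of (session_id, action, topic) tuples, where action
    is "vote" (the session chose this topic in a contest) or "skip".
    Sessions that cast many votes but pile most of them onto one topic,
    skipping nearly everything else, look like vote stacking.
    """
    votes_per_topic = defaultdict(Counter)
    for session_id, action, topic in events:
        if action == "vote":
            votes_per_topic[session_id][topic] += 1
    flagged = []
    for session_id, counts in votes_per_topic.items():
        n = sum(counts.values())
        if n < min_votes:
            continue  # too few votes to judge this session
        top_topic, top_count = counts.most_common(1)[0]
        if top_count / n >= focus_threshold:
            flagged.append((session_id, top_topic))
    return flagged
```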

Notes

  1. This is an estimate; AllOurIdeas provides information on unique sessions per day (of which there were 111). Participants are allowed (and encouraged) to take the survey multiple times, in part because new ideas were submitted throughout the course of the consultation.
  2. AllOurIdeas voting map
  3. Grants:IdeaLab/Reimagining_WMF_grants/Outcomes.
  4. Score is a percentage reflecting how often the idea would be preferred if it were randomly paired with another idea from this list.
  5. # of contests refers to the total number of times the idea was actually compared with another idea across participants.