Forum Replies Created

    • #48227

      Mia Ridge
      Participant
      @mia

      Save the dates!

      We’re starting planning in earnest for the workshop event that rounds off our project. Meghan has written a ‘save the date’ post that sets out the basics and some possible topics (https://collectivewisdomproject.org.uk/save-the-date-collective-wisdom-workshop-20-22-october-2021/), including:

      • Evidencing mental health and other benefits of participation in crowdsourcing, building on work on volunteering and mental health, social prescribing, etc., and designing to maximise positive impact
      • Human computation systems with humanistic values, focus on volunteer experience
      • Patterns and workflows for engaging tasks that bring data or research where it’s most needed – practical realities of nice-to-have data vs hardcore discoverability
      • Learning from other practical and theoretical fields e.g. physical volunteering; CSCW; social prescribing
      • Better links between different fields of practice (volunteering, community archives, etc) and academic research (crowd science, CSCW, etc)
    • #44319

      Mia Ridge
      Participant
      @mia

      Apologies for the time it’s taken me to get back to this after our book sprint…

      To clarify my understanding – you’ll have some digital and some physical photos, and different activities around them – in-person, synchronous and co-located; and online, individual and possibly asynchronous? If so, is one of the main distinctions between the two the different levels of support from the project team and other participants during the process?

      I’ve found it a lot easier to get richer feedback on tasks during face-to-face discussions (including video calls for usability walk-throughs, where necessary), so that might mean you want more flexibility to allow for ideas that emerge during the in-person sessions with analogue photographs.

      That aside, a key question may be: do the data outputs for tasks on the analogue and digital items need to end up in the same format? And will you be asking the same questions and working towards the same goal in each case?
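
      If it helps to make that concrete, below is a minimal sketch (in Python, with entirely hypothetical field names) of one way outputs from an in-person session with analogue photos and an online task could be normalised into a shared record format, so the two streams can be merged later:

      from dataclasses import dataclass, field
      from datetime import date

      @dataclass
      class PhotoRecord:
          """One shared output format for both analogue and digital tasks.
          Field names are hypothetical - adjust to your own cataloguing needs."""
          item_id: str          # accession or scan identifier
          description: str      # text gathered from participants
          source: str           # 'in-person' or 'online'
          collected_on: date
          tags: list = field(default_factory=list)

      def from_workshop_notes(item_id, notes, session_date):
          # In-person sessions tend to produce free-form notes that need light tidying.
          return PhotoRecord(item_id, notes.strip(), 'in-person', session_date)

      def from_online_submission(item_id, form_text, submitted_on, tags):
          # Online tasks often arrive pre-structured, e.g. from a web form.
          return PhotoRecord(item_id, form_text.strip(), 'online', submitted_on, tags)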

    • #42285

      Mia Ridge
      Participant
      @mia

      We have two new calls for participation! Find out why we’re doing these surveys and how the results will be used at https://collectivewisdomproject.org.uk/we-want-to-hear-from-you/

      To participate, follow the links below. Our ‘Case Study’ survey is designed for practitioners, while our ‘Volunteer Voice’ survey is designed for people who have taken part in crowdsourcing, citizen science, citizen history, or digital/online volunteering projects. Please feel free to fill out one or both, depending on your personal experience, and to share them with someone else who might wish to contribute. Thank you in advance for sharing your experiences.

      Collective Wisdom ‘Volunteer Voice’ survey: https://forms.gle/vMyUbuf2CrkTfXfF9

      Collective Wisdom ‘Case Study’ survey: https://forms.gle/gZjkUtYKhLo8wGN1A

    • #41019

      Mia Ridge
      Participant
      @mia

      A question that was close to my heart this month – what advice would you give to someone in the lead-up to launching an online project? What might I have forgotten to do or set up?

      And what’s different when you’re launching a new phase of a project versus launching an entirely new project?

    • #40964

      Mia Ridge
      Participant
      @mia

      I noticed this question from Nina Janz some time ago, and I’m (finally) sharing it here as I think it’s reasonably common in some fields:

      ‘I am looking for any standardisations or guidelines for transcriptions (online) in e.g. #crowdsourcing projects – I would use ISAD(G) – but it includes more titles, other than full-text transcripts’

      My initial response was: ‘It depends what you want to do with the transcriptions. If you have a catalogue that you’ll ingest to in mind, you might want to work out the absolutely compulsory fields and any ‘nice to have’ fields and explain how the data will be used. There’s a balance between interesting[,] enjoyable tasks and the extra miles required for cataloguing. If you need cataloguing and that makes the work less enjoyable, the lines between volunteering and asking for professional work for free become blurred. Retired and furloughed staff also complicate the picture for now.’

      And Sam said, ‘Just to second what Mia & Ben have said, it’s really dependent on the project & type of data being transcribed and the use case for the results. E.g. letters and tabular records would need different guidelines; common abbreviations are often specific to the content, etc. It’s a bit of a cycle, as Mia notes — restrictive/rigorous standards might make for a less enjoyable experience, and not all volunteers will read lengthy instructions, but too few guidelines and you’ll wind up with results that aren’t always useful.’

      Additional replies to the original tweet also contain links to sample transcription guidelines and approaches.
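
      To make the ‘compulsory vs nice-to-have fields’ idea concrete, here’s a minimal sketch (Python; the field names are made up, not from any standard) of the kind of check you might run on a transcription record before catalogue ingest:

      # Hypothetical field lists - replace with whatever your catalogue actually requires.
      COMPULSORY_FIELDS = {'item_reference', 'transcription_text'}
      NICE_TO_HAVE_FIELDS = {'date', 'place', 'people_mentioned'}

      def check_record(record):
          """Return a list of problems; an empty list means the record can be ingested."""
          problems = [f'missing compulsory field: {f}'
                      for f in COMPULSORY_FIELDS if not record.get(f)]
          for f in NICE_TO_HAVE_FIELDS - record.keys():
              print(f'note: optional field absent: {f}')  # log, but don't block ingest
          return problems

      record = {'item_reference': 'vol-42/17', 'transcription_text': 'Dear Sir...'}
      assert check_record(record) == []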

    • #31328

      Mia Ridge
      Participant
      @mia

      Thinking about it, one of the challenges for people thinking about crowdsourcing ideas for the first time is understanding whether their idea is similar to established patterns, or if it’s novel.

      Platforms tend to cater to projects that match common patterns of tasks, though each varies in how it approaches them. Entirely new or novel tasks can be harder to get off the ground, as they might need bespoke development work or have to work against the grain of available platforms.

      But how do you know whether your idea is novel or similar to existing tasks? Is working out how to describe it one of the challenges for people just starting out? Would better explanations of the various common, well-supported tasks help?

      I’m curious to know what you think!

    • #30871

      Mia Ridge
      Participant
      @mia

      Common platforms include:

      • The Zooniverse Project Builder
      • FromThePage
      • Scripto + Omeka
      • Pybossa

      This is only a starting point and doesn’t begin to address the strengths and affordances of each platform, or consider the other systems you’ll need around the platform to manage data going in and out.

    • #30870

      Mia Ridge
      Participant
      @mia

      The following is a bit of a brain dump of things I tend to say in conversations about crowdsourcing projects, based on my academic research and practical experience. I should really just dig out my teaching slides as they’re designed to anticipate common questions, but in the spirit of ‘the perfect being the enemy of the good’ I’m going to start here:

      • Think of crowdsourcing as a form of very structured volunteering that takes place online.
      • Crowdsourcing relies on technology but it’s actually about people. You’re entering into a relationship with people who are giving you their time and attention – please honour that.
      • Some volunteers might want a space to chat with others; some might only want to talk to you, or to no one at all.
      • Platforms often have a form of data validation built in, but they come with assumptions about what ‘quality’ means in your context. For example, any transcription might be better than none, or you might want a few people to submit exactly matching transcriptions of snippets of text (see the sketch after this list). You might need keyword tags to come from a controlled vocabulary, or to be added by more than one tagger. Or those things might not really matter to you.
      • Lots of factors come into platform choices: e.g. do you have any technical support? what kinds of source material do you have? what kind of data do you want out of it? what kind of experience do you want for your volunteers? is random access to items ok or should people choose items to work on?
      • Platforms make assumptions about the world. Those assumptions might include: it’s more valid if you show people random items from a queue; items only have one part or image; items are or aren’t part of a larger narrative; transcriptions are better when someone else can help review them or chip in.
      • You can use manual methods (e.g. emailing things around) but you might be creating a rod for your own back if you later want to merge different transcriptions to create one good copy.
      • Platforms don’t have to be high-end: maybe an editable doc will do for simple transcriptions.
      • It’s important to have quite detailed conversations internally about where the data created will go. Will it be backed up and accessible across the organisation? If it’s going into a collections management system, which fields will it go into? How will the data be labelled?
      • Designing a task is a balancing act between the results you need and what people are willing to do. The more invested people are in your task, the more complicated the request you can make.
      • Different volunteers will have different preferences. The more specialist your task, the more work you’ll need to put into finding, recruiting and retaining suitable volunteers.
      • Think about copyright now, both for your source materials and for the data that volunteers create.
      • Some systems are all crowdsourcing, all the time, so it’s relatively easy for volunteers to find items to work on. Others are more ‘you can contribute if you can find items to work on’.
      • Volunteers appreciate upfront information about how their contributions will be checked for errors. They especially like knowing how they can fix a mistake if they make one.
      • Volunteers often make a mistake or two in their first tasks. We’re all human. Anticipate and address that fact, or just live with it.
      • Writing good tutorials, introductions and help pages takes time, and (IMHO) is best done with enough time to get some distance from it, double check it and get feedback from others.
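
      As a concrete illustration of the ‘exactly matching transcriptions’ style of validation above, here’s a minimal sketch (Python; the threshold and normalisation rules are my assumptions, not any particular platform’s behaviour) of checking several volunteers’ transcriptions of one snippet for agreement:

      from collections import Counter

      def normalise(text):
          # Collapse whitespace and ignore case so trivial differences don't block agreement.
          return ' '.join(text.split()).lower()

      def consensus(transcriptions, required_matches=3):
          """Return the agreed (normalised) text once enough volunteers match, else None."""
          if not transcriptions:
              return None
          text, n = Counter(normalise(t) for t in transcriptions).most_common(1)[0]
          return text if n >= required_matches else None

      print(consensus(['15 Oct 1891', '15 oct 1891', '15  Oct  1891']))  # '15 oct 1891'

      In practice you’d probably return one of the original submissions rather than the normalised form, and tune the normalisation (abbreviations, punctuation, line breaks) to your material – which is exactly where a platform’s own assumptions about ‘quality’ come in.
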
    • #30869

      Mia Ridge
      Participant
      @mia

      Thinking back over previous conversations and unpicking some of the assumptions people bring to them, I’ve kick-started the list with some questions I’ve heard a few times. I’d love to know which ones resonate, and more importantly, what questions you’d add:

      • How do I manage data quality?
      • Is the overhead of picking and figuring out a platform worth it? Can I just use manual methods like email or comments instead?
      • Which platform do I choose?
      • Is it better to put everything into one task or have a few different tasks for different outputs?
      • What about vandalism or bad data?
      • How do I find people who’ll want to take part?
      • How do I manage if people in the organisation get nervous about it?
      • How much time will I need to get a project going? What steps are involved?
      • How much time will I need while a project is going? What tasks are involved?
      • How much time will I need to wrap up a project? What steps are involved?
      • How do we direct people to work that needs doing?
      • What about audio files? Video?
    • #29905

      Mia Ridge
      Participant
      @mia

      Obviously this has all been postponed – we had a backup date in October but at this stage it’s too difficult to make any definite plans.
