Reader Studies

A reader study enables you to have a set of readers answer a set of questions about a set of images.

Editors
You can add multiple editors to your reader study. An editor is someone who can edit the reader study settings, add other editors, add and remove readers, add images, and edit the questions.
Readers
A user who can take part in this study, creating an answer for each question and image in the study.
Cases
The set of images that will be used in the study.
Hanging List
How each image will be presented to the reader, as a set of hanging protocols. For instance, you might want to present two images side by side and have a reader answer a question about both, or overlay one image on another.

Creating a Reader Study

A ReaderStudy can use any available Workstation. A WorkstationConfig can also be used for the study to customise the default appearance of the workstation.

Cases

Cases can be added to a reader study by adding Image instances. Multiple image formats are supported:

  • .mha
  • .mhd with the accompanying .zraw or .raw file
  • .tif/.tiff
  • .jpg/.jpeg
  • .png
  • 3D/4D DICOM support is also available, though this is experimental and not guaranteed to work on all .dcm images.

Defining the Hanging List

When you upload a set of images you have the option to automatically generate the default hanging list. The default hanging list presents each reader with one image per hanging protocol.

You can customise the hanging list on the study edit page. Here, you can assign multiple images and overlays to each protocol. A main and a secondary image port are available. Overlays can be applied to either image port using the keys main-overlay and secondary-overlay.
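For illustration, a minimal hand-written hanging list might look like the sketch below. The main/secondary port keys come from the description above; the list-of-dicts structure and the image names are assumptions of the example.

    # A sketch of a customised hanging list (image names are hypothetical):
    hanging_list = [
        # One image in the main port.
        {"main": "case_001.mha"},
        # Two images side by side.
        {"main": "case_002_t1.mha", "secondary": "case_002_t2.mha"},
        # An overlay applied to the main image port.
        {"main": "case_003.mha", "main-overlay": "case_003_mask.mha"},
    ]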

Questions

A Question can be optional, and the following answer_type options are available:

  • Heading (not answerable)
  • Bool
  • Single line text
  • Multiline text

The following annotation answer types are also available:

  • Distance measurement
  • Multiple distance measurements
  • 2D bounding box

To use an annotation answer type you must also select the image port where the annotation will be made.

Adding Ground Truth

To monitor the performance of the readers you are able to add ground truth to a reader study by uploading a CSV file.
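The exact column headers are dictated by the questions in your study; as an illustrative assumption only, a file pairing image names with one column per question might look like this (with a semicolon joining the images of a multi-image hanging):

    images,Is there a lesion?,Lesion diameter
    case_001.mha,true,12.5
    case_002_t1.mha;case_002_t2.mha,false,0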

If ground truth has been added to a ReaderStudy, any Answer given by a reader is evaluated by applying the scoring_function chosen for the Question.

The scores can then be compared on the leaderboard. Statistics are also available based on these scores: the average and total scores for each question as well as for each case are displayed in the statistics view.

grandchallenge.reader_studies.models.ANSWER_TYPE_SCHEMA = {'$schema': 'http://json-schema.org/draft-07/schema#', 'anyOf': [{'$ref': '#/definitions/null'}, {'$ref': '#/definitions/STXT'}, {'$ref': '#/definitions/MTXT'}, {'$ref': '#/definitions/BOOL'}, {'$ref': '#/definitions/HEAD'}, {'$ref': '#/definitions/2DBB'}, {'$ref': '#/definitions/DIST'}, {'$ref': '#/definitions/MDIS'}, {'$ref': '#/definitions/POIN'}, {'$ref': '#/definitions/MPOI'}, {'$ref': '#/definitions/POLY'}, {'$ref': '#/definitions/MPOL'}, {'$ref': '#/definitions/CHOI'}, {'$ref': '#/definitions/MCHO'}, {'$ref': '#/definitions/MCHD'}], 'definitions': {'2DBB': {'properties': {'corners': {'items': {'items': {'type': 'number'}, 'maxItems': 3, 'minItems': 3, 'type': 'array'}, 'maxItems': 4, 'minItems': 4, 'type': 'array'}, 'name': {'type': 'string'}, 'type': {'enum': ['2D bounding box']}}, 'required': ['version', 'type', 'corners'], 'type': 'object'}, 'BOOL': {'type': 'boolean'}, 'CHOI': {'type': 'number'}, 'DIST': {'properties': {'end': {'items': {'type': 'number'}, 'maxItems': 3, 'minItems': 3, 'type': 'array'}, 'name': {'type': 'string'}, 'start': {'items': {'type': 'number'}, 'maxItems': 3, 'minItems': 3, 'type': 'array'}, 'type': {'enum': ['Distance measurement']}}, 'required': ['version', 'type', 'start', 'end'], 'type': 'object'}, 'HEAD': {'type': 'null'}, 'MCHD': {'items': {'type': 'number'}, 'type': 'array'}, 'MCHO': {'items': {'type': 'number'}, 'type': 'array'}, 'MDIS': {'properties': {'lines': {'items': {'allOf': [{'$ref': '#/definitions/line-object'}]}, 'type': 'array'}, 'name': {'type': 'string'}, 'type': {'enum': ['Multiple distance measurements']}}, 'required': ['version', 'type', 'lines'], 'type': 'object'}, 'MPOI': {'properties': {'name': {'type': 'string'}, 'points': {'items': {'allOf': [{'$ref': '#/definitions/point-object'}]}, 'type': 'array'}, 'type': {'enum': ['Multiple points']}}, 'required': ['version', 'type', 'points'], 'type': 'object'}, 'MPOL': {'properties': {'name': {'type': 'string'}, 'polygons': {'items': {'$ref': '#/definitions/polygon-object'}, 'type': 'array'}, 'type': {'enum': ['Multiple polygons']}}, 'required': ['type', 'version', 'polygons'], 'type': 'object'}, 'MTXT': {'type': 'string'}, 'POIN': {'properties': {'name': {'type': 'string'}, 'point': {'items': {'type': 'number'}, 'maxItems': 3, 'minItems': 3, 'type': 'array'}, 'type': {'enum': ['Point']}}, 'required': ['version', 'type', 'point'], 'type': 'object'}, 'POLY': {'properties': {'groups': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}, 'path_points': {'items': {'items': {'type': 'number'}, 'maxItems': 3, 'minItems': 3, 'type': 'array'}, 'type': 'array'}, 'seed_point': {'items': {'type': 'number'}, 'maxItems': 3, 'minItems': 3, 'type': 'array'}, 'sub_type': {'type': 'string'}}, 'required': ['name', 'seed_point', 'path_points', 'sub_type', 'groups', 'version'], 'type': 'object'}, 'STXT': {'type': 'string'}, 'line-object': {'properties': {'end': {'items': {'type': 'number'}, 'maxItems': 3, 'minItems': 3, 'type': 'array'}, 'name': {'type': 'string'}, 'start': {'items': {'type': 'number'}, 'maxItems': 3, 'minItems': 3, 'type': 'array'}}, 'required': ['start', 'end'], 'type': 'object'}, 'null': {'type': 'null'}, 'point-object': {'properties': {'name': {'type': 'string'}, 'point': {'items': {'type': 'number'}, 'maxItems': 3, 'minItems': 3, 'type': 'array'}}, 'required': ['point'], 'type': 'object'}, 'polygon-object': {'properties': {'groups': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}, 'path_points': {'items': {'items': {'type': 'number'}, 'maxItems': 3, 'minItems': 3, 'type': 'array'}, 'type': 'array'}, 'seed_point': {'items': {'type': 'number'}, 'maxItems': 3, 'minItems': 3, 'type': 'array'}, 'sub_type': {'type': 'string'}}, 'required': ['name', 'seed_point', 'path_points', 'sub_type', 'groups'], 'type': 'object'}}, 'properties': {'version': {'additionalProperties': {'type': 'number'}, 'required': ['major', 'minor'], 'type': 'object'}}}

Schema used to validate if answers are of the correct type and format.
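Because answers are plain JSON validated against this schema, you can check a payload yourself with the jsonschema package. A minimal sketch for a 2D bounding box answer; the name and coordinates are made up:

    from jsonschema import validate

    from grandchallenge.reader_studies.models import ANSWER_TYPE_SCHEMA

    answer = {
        "version": {"major": 1, "minor": 0},
        "type": "2D bounding box",
        "name": "suspicious region",  # hypothetical label
        # Four corners, each a 3D coordinate.
        "corners": [
            [10.0, 10.0, 0.5],
            [70.0, 10.0, 0.5],
            [70.0, 40.0, 0.5],
            [10.0, 40.0, 0.5],
        ],
    }

    # Raises jsonschema.exceptions.ValidationError if the payload is malformed.
    validate(instance=answer, schema=ANSWER_TYPE_SCHEMA)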

class grandchallenge.reader_studies.models.Answer(*args, **kwargs)[source]

An Answer can be provided to a Question that is a part of a ReaderStudy.

exception DoesNotExist
exception MultipleObjectsReturned
api_url

API url for this Answer.

calculate_score(ground_truth)[source]

Calculate the score for this Answer based on ground_truth.

csv_values

Values that are included in this Answer’s csv export.

save(*args, **kwargs)[source]

Save the current instance. Override this in a subclass if you want to control the saving process.

The ‘force_insert’ and ‘force_update’ parameters can be used to insist that the “save” must be an SQL insert or update (or equivalent for non-SQL backends), respectively. Normally, they should not be set.

save_without_historical_record(*args, **kwargs)

Save model without saving a historical record

Make sure you know what you’re doing before you use this method.

static validate(*, creator, question, answer, images, is_ground_truth=False, instance=None)[source]

Validates all fields provided for answer.
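A minimal usage sketch of the signature above; reader, question and image stand in for real model instances and are assumptions of the example:

    # `reader` is a User, `question` a boolean Question in the study,
    # `image` an Image the question is asked about (all hypothetical).
    Answer.validate(
        creator=reader,
        question=question,
        answer=True,  # the payload must match question.answer_type
        images=[image],
    )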

class grandchallenge.reader_studies.models.CategoricalOption(id, question, title, default)[source]
exception DoesNotExist
exception MultipleObjectsReturned
class grandchallenge.reader_studies.models.Question(id, created, modified, reader_study, question_text, help_text, answer_type, image_port, required, direction, scoring_function, order)[source]
exception DoesNotExist
exception MultipleObjectsReturned
api_url

API url for this Question.

calculate_score(answer, ground_truth)[source]

Calculates the score for answer by applying scoring_function to answer and ground_truth.
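As an illustration of the idea (not the actual implementation), an accuracy-style scoring function could simply award full marks for exact agreement:

    def accuracy_score(answer, ground_truth):
        # Toy scoring function: 1.0 on exact agreement, 0.0 otherwise.
        return 1.0 if answer == ground_truth else 0.0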

clean()[source]

Hook for doing any extra model-wide validation after clean() has been called on every field by self.clean_fields. Any ValidationError raised by this method will not be associated with a particular field; it will have a special-case association with the field defined by NON_FIELD_ERRORS.

csv_values

Values that are included in this Question’s csv export.

is_answer_valid(*, answer)[source]

Validates answer against ANSWER_TYPE_SCHEMA.

is_fully_editable

True if no Answer has been given for this Question.

read_only_fields

An empty list if this Question is fully editable; question_text, answer_type, image_port and required otherwise.

save(*args, **kwargs)[source]

Save the current instance. Override this in a subclass if you want to control the saving process.

The ‘force_insert’ and ‘force_update’ parameters can be used to insist that the “save” must be an SQL insert or update (or equivalent for non-SQL backends), respectively. Normally, they should not be set.

class grandchallenge.reader_studies.models.ReaderStudy(*args, **kwargs)[source]

Reader Study model.

A reader study is a tool that allows users to have a set of readers answer a set of questions on a set of images (cases).

exception DoesNotExist
exception MultipleObjectsReturned
add_editor(user)[source]

Adds user as an editor for this ReaderStudy.

add_ground_truth(*, data, user)[source]

Add ground truth answers provided by data for this ReaderStudy.

add_reader(user)[source]

Adds user as a reader for this ReaderStudy.
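Both methods can be used from a shell or script; a short sketch, where study is a ReaderStudy and alice and bob are User instances (assumed):

    study.add_editor(alice)  # alice can now edit settings, cases and questions
    study.add_reader(bob)    # bob can now answer the questions

    assert study.is_editor(alice)
    assert study.is_reader(bob)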

answerable_question_count

The number of answerable questions for this ReaderStudy.

answerable_questions

All questions for this ReaderStudy except those with answer type heading.

generate_hanging_list()[source]

Generates a new hanging list.

Each image in the ReaderStudy is assigned to the main port of its own hanging.

get_hanging_list_images_for_user(*, user)[source]

Returns a shuffled list of the hanging list images for a particular user.

The shuffle is seeded with the user's pk, and using RandomState from numpy guarantees that the ordering will be consistent across Python/library versions. Returns the unshuffled list if shuffle_hanging_list is False.
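The determinism claim is easy to check with numpy directly; a minimal sketch of the seeded-shuffle idea, independent of the model (the helper name is hypothetical):

    import numpy as np

    def shuffled_hanging_list(hanging_list, user_pk):
        # Seeding RandomState with the user's pk makes the order reproducible.
        rng = np.random.RandomState(user_pk)
        shuffled = list(hanging_list)
        rng.shuffle(shuffled)  # in-place shuffle
        return shuffled

    # The same user always sees the same ordering.
    assert shuffled_hanging_list([1, 2, 3, 4], 42) == shuffled_hanging_list([1, 2, 3, 4], 42)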

get_progress_for_user(user)[source]

Returns the percentage of completed hangings and questions for user.

hanging_image_names

Names for all images in the hanging list.

hanging_list_diff(provided=None)[source]

Returns the diff between the images added to the study and the images in the hanging list.

hanging_list_images

Substitutes the image name for the image detail api url for each image defined in the hanging list.

hanging_list_valid

Tests that all of the study images are included in the hanging list exactly once.

help_text

The cleaned help text from the markdown sources.

image_groups

Names of the images as they are grouped in the hanging list.

is_editor(user)[source]

Checks if user is an editor for this ReaderStudy.

is_reader(user)[source]

Checks if user is a reader for this ReaderStudy.

is_valid

Returns True if the hanging list is valid and there are no duplicate image names in this ReaderStudy and False otherwise.

leaderboard

The leaderboard for this ReaderStudy.

non_unique_study_image_names

Returns all of the non-unique image names for this ReaderStudy.

remove_editor(user)[source]

Removes user as an editor for this ReaderStudy.

remove_reader(user)[source]

Removes user as a reader for this ReaderStudy.

save(*args, **kwargs)[source]

Save the current instance. Override this in a subclass if you want to control the saving process.

The ‘force_insert’ and ‘force_update’ parameters can be used to insist that the “save” must be an SQL insert or update (or equivalent for non-SQL backends), respectively. Normally, they should not be set.

score_for_user(user)[source]

Returns the average and total score for answers given by user.

scores_by_user

The average and total scores for this ReaderStudy grouped by user.

statistics

Statistics per question and case based on the total / average score.

study_image_names

Names for all images added to this ReaderStudy.

class grandchallenge.reader_studies.models.ReaderStudyPermissionRequest(*args, **kwargs)[source]

When a user wants to read a reader study, editors have the option of reviewing each user before accepting or rejecting them. This class records the information needed for that review.

exception DoesNotExist
exception MultipleObjectsReturned
grandchallenge.reader_studies.models.delete_reader_study_groups_hook(*_, instance, using, **__)[source]

Deletes the related groups.

We use a signal rather than overriding delete() to catch usages of bulk_delete.