Resources are of paramount importance because they foster scientific advancement. They include, among others, datasets, ontologies, benchmarks, workflows, and software. Sharing them is key to ensuring reproducibility, to allowing other researchers to compare results and methods, and to opening new lines of research, in line with the FAIR principles for scientific data management.
The ISWC 2023 Resources Track aims to promote the sharing of resources that support, enable, or utilize Semantic Web research. Resources include, but are not restricted to, datasets, ontologies/vocabularies, ontology design patterns, evaluation benchmarks or methods, software tools/services, APIs and software frameworks, workflows, crowdsourcing task designs, protocols, methodologies, and metrics that have contributed, or may contribute, to the generation of novel scientific work in the Semantic Web. In particular, we encourage the sharing of such resources following well-established best practices within the Semantic Web community. Accordingly, this track calls for contributions that provide a concise and clear description of a resource and its usage.
Important Dates – All deadlines are 23:59 AoE (anywhere on Earth)
Abstracts Due: 2 May 2023
Full Papers Due: 9 May 2023
Objection & Response: 13–16 June 2023
Notifications: 12 July 2023
Camera-ready Papers Due: 31 July 2023
Resources of Interest
A typical Resources Track paper focuses on reporting a resource that falls into one of the following categories:
- Datasets produced:
  - to support specific evaluation tasks (for instance, labelled ground-truth data);
  - to support novel research methods;
  - by novel algorithms;
- Ontologies, vocabularies and ontology design patterns, with a focus on describing the modelling process underlying their creation;
- Benchmarking activities focusing on datasets and algorithms for comprehensible and systematic evaluation of existing and future systems;
- Reusable research software, e.g., prototypes/services supporting a given research hypothesis and enabling specific data processing and engineering tasks;
- Community-shared software frameworks that can be extended or adapted to support scientific study and experimentation;
- Scientific and experimental workflows used and reused in practical studies;
- Crowdsourcing task designs that have been used and can be (re)used for building resources such as gold standards and the like;
- Protocols for conducting experiments and studies;
- Novel evaluation methodologies and metrics, and their demonstration in an experimental study.
Differentiation From the Other Tracks
We strongly recommend that prospective authors carefully check the calls of the other main tracks of the conference to identify the optimal track for their submission. Papers that propose new algorithms and architectures should continue to be submitted to the regular Research Track, whilst papers that describe the use of Semantic Web technologies in practical settings should be submitted to the In-Use Track. When new reusable resources, such as datasets, ontologies, or workflows, are produced in the process of achieving such results, they are a suitable subject for a submission to the Resources Track.
The program committee will consider the quality of both the resource and the paper in its review process. Authors must therefore ensure unfettered access to the resource both during and after the review process, by citing the resource at a permanent location: for example, data deposited in a repository such as FigShare, Zenodo, or a domain-specific repository, and software available in a public code repository such as GitHub or BitBucket, or in one's institutional open data repository. Code releases should be properly deposited according to community best practices. In exceptional cases, when it is not possible to make the resource public, authors must provide anonymous access to the resource for the reviewers and briefly motivate why the resource cannot be made public. All resources should clearly disclose their license.
We welcome the submission both of established resources, which have a community of users beyond the authors, and of new resources, which may not yet be able to demonstrate established reuse but offer sufficient evidence and motivation for claiming potential adoption. In the first case, authors are required to provide evidence, such as statistics about the adoption of the resource. In the second case, authors should support the claim of potential adoption by providing evidence such as discussions in fora, mailing lists, and the like.
All resources will be evaluated along the following review criteria:
- Does the resource break new ground?
- Does the resource fill an important gap?
- How does the resource advance the state of the art?
- Has the resource been compared to other existing resources (if any) of similar scope?
- Is the resource of interest to the Semantic Web community?
- Is the resource of interest to society in general?
- Has the resource had, or will it have, an impact, especially in supporting the adoption of Semantic Web technologies?
- Is there evidence of usage by a wider community beyond the resource creators or their project? Alternatively (for new resources), what is the resource’s potential for being (re)used; for example, based on the activity volume in discussion fora, mailing lists, issue trackers, support portals, etc.?
- Is the resource easy to (re)use? For example, does it have high-quality documentation? Are there tutorials available?
- Is the resource general enough to be applied in a wider set of scenarios, not just the one it was originally designed for? If it is specific, is there substantial demand?
- Is there potential for extensibility to meet future requirements?
- Does the resource include a clear explanation of how others use the data and software? Or (for new resources) how others are expected to use the data and software?
- Does the resource description clearly state what the resource can and cannot do, and the rationale for the exclusion of some functionality?
Design & Technical quality:
- Does the design of the resource follow resource-specific best practices?
- Did the authors perform an appropriate reuse or extension of suitable high-quality resources? For example, in the case of ontologies, authors might extend upper ontologies and/or reuse ontology design patterns.
- Is the resource suitable for solving the task at hand?
- Does the resource provide an appropriate description (both human- and machine-readable), thus encouraging the adoption of FAIR principles? Is there a schema diagram? For datasets, is the description available in terms of VoID/DCAT/DublinCore?
- Mandatory: Is the resource (and related results) published at a persistent URI (PURL, DOI, w3id)?
- Mandatory: Is there a canonical citation associated with the resource?
- Mandatory: Does the resource provide a licence specification? (See creativecommons.org, opensource.org for more information)
- Is the resource publicly available? For example, as an API, as Linked Open Data, as a download, or in an open code repository.
- Is the resource publicly findable? Is it registered in (community) registries (e.g. Linked Open Vocabularies, BioPortal, or DataHub)? Is it registered in generic repositories such as FigShare, Zenodo or GitHub?
- Is there a sustainability plan specified for the resource? Is there a plan for the medium and long-term maintenance of the resource?
- Does the resource adopt open standards, when applicable? Alternatively, does it have a good reason not to adopt standards?
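Several of the criteria above (a machine-readable description, a persistent identifier, and an explicit licence) can be satisfied with a few lines of metadata. As a minimal sketch, the following Python snippet (standard library only; all URIs, names, and property choices are illustrative assumptions, not values prescribed by the track) emits a DCAT description of a dataset in Turtle:

```python
# Hypothetical sketch: a minimal, machine-readable DCAT description for a
# dataset resource, emitted as Turtle. All URIs below are illustrative.

def dcat_description(uri: str, title: str, license_uri: str, download_url: str) -> str:
    """Return a minimal DCAT dataset description in Turtle syntax."""
    return f"""@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .

<{uri}> a dcat:Dataset ;
    dct:title "{title}" ;
    dct:license <{license_uri}> ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <{download_url}>
    ] .
"""

print(dcat_description(
    "https://w3id.org/example/my-dataset",           # hypothetical persistent URI
    "My Example Dataset",
    "https://creativecommons.org/licenses/by/4.0/",  # explicit licence
    "https://example.org/files/my-dataset.zip",      # hypothetical download URL
))
```

Depositing such a description alongside the dataset (e.g., in Zenodo or a community registry) makes it easier for reviewers to check the availability, licensing, and findability criteria.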
Guidelines for reviewers are available here.
To ensure that reviewers and readers of published papers will easily find the mandatory availability information, please use the Resource Availability Statement Guide and suggested wording.
Regarding specific resource types, checklists of their quality attributes are available in a presentation. Both authors and reviewers may make use of them when assessing the quality of the particular resource.
Submission Details
- Pre-submission of abstracts is a strict requirement. All papers and abstracts have to be submitted electronically via EasyChair.
- Papers describing a resource must be between 8 and 15 pages long, plus references. Papers must describe the resource and address its sustainability and the community around it. Benchmark papers are expected to include evaluations and to provide a detailed description of the experimental setting. Papers that exceed the page limit will be rejected without review.
- All submissions must be in English.
- Submissions must be either in PDF or HTML, formatted in the style of the Springer Publications format for Lecture Notes in Computer Science (LNCS). For details on the LNCS style, see Springer’s Author Instructions. For HTML submission guidance, please see the HTML submission guide used for ISWC 2023.
- ISWC 2023 submissions for the resources track are not anonymous. We encourage embedding metadata in the PDF or HTML to provide a machine-readable link from the paper to the resource.
- In order to reduce the workload on authors and reviewers, while still providing an opportunity for author feedback in exceptional cases, we are replacing the rebuttal phase with an opportunity for “Objection & Response”. This should only be used in two cases: 1) to highlight clear factual errors in reviews regarding the content of the submission; 2) to respond to explicit questions from reviewers. Responses that misuse this phase will be ignored during the review process.
- Authors of accepted papers will be required to provide semantic annotations for the abstract of their submission, which will be made available on the conference web site. Details will be provided at the time of acceptance.
- Accepted papers will be distributed to conference attendees and also published by Springer in the conference proceedings, as part of the Lecture Notes in Computer Science series.
- At least one author of each accepted paper must register for the conference and present the paper. As in previous years, students will be able to apply for registration or travel support to attend the conference. Preference will be given to students that are first authors of papers accepted to the main conference or the doctoral consortium, followed by those who are first authors of papers accepted to ISWC workshops and the Poster & Demo session.
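One way to follow the guideline above about embedding a machine-readable link from the paper to the resource, for an HTML submission, is a JSON-LD block in the paper's head. The following Python sketch (standard library only; the schema.org property choices and all URIs are assumptions, not requirements of the track) generates such a snippet:

```python
import json

# Hypothetical metadata linking a paper to its resource; the property
# choices and URIs below are illustrative, not mandated by the track.
metadata = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "name": "My Resource Paper",
    "about": {
        "@type": "Dataset",
        "@id": "https://w3id.org/example/my-dataset",  # persistent identifier
    },
}

# Wrap the metadata in a script tag suitable for the <head> of an HTML paper.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(metadata, indent=2)
    + "\n</script>"
)
print(snippet)
```

For PDF submissions, an analogous machine-readable link can be provided via the document's embedded metadata (e.g., XMP).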
Prior Publication and Multiple Submissions
ISWC 2023 will not accept resource papers that, at the time of submission, are under review for or have already been published or accepted for publication in a journal, another conference, or another ISWC track. The conference organisers may share information on submissions with other venues to ensure that this rule is not violated.
Research Metadata and Comparisons
To help authors state the novelty of their work clearly to readers and peer reviewers alike, to improve the findability of the paper if accepted, and to put knowledge graphs to use ourselves, you may add to the paper a so-called “ORKG comparison” with the Open Research Knowledge Graph (ORKG). Such a comparison characterizes a submission by juxtaposing it with related resources, if there are any, thereby highlighting the key differences between your resource and related ones. More information on the background and on how to create an ORKG comparison can be found here (including a how-to video). This can be done during the submission process, in which case a link to the comparison can be added to the submission for the reviewers. This workflow describes the steps involved in the creation of such a comparison.
This addition to an ISWC paper submission is experimental and optional. It may not be relevant to your resource, and absence of such a comparison will not negatively affect the review of the paper.
Resources Track Chairs
Prof. Guilin Qi
School of Computer Science and Engineering, China
Universidad Politécnica de Madrid, Spain