Welcome to the website of the 23rd International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2022).
VMCAI provides a forum for researchers from the communities of Verification, Model Checking, and Abstract Interpretation, facilitating interaction, cross-fertilization, and advancement of hybrid methods that combine these and related areas. VMCAI 2022 will be the 23rd edition in the series.
VMCAI 2022 will take place on January 16-18, 2022.
Call for Papers
VMCAI 2022 is the 23rd International Conference on Verification, Model Checking, and Abstract Interpretation. The conference will be held on January 16-18, 2022. VMCAI provides a forum for researchers from the communities of Verification, Model Checking, and Abstract Interpretation, facilitating interaction, cross-fertilization, and advancement of hybrid methods that combine these and related areas.
The program of VMCAI 2022 will consist of refereed research papers as well as invited lectures and tutorials. Research contributions can report new results as well as experimental evaluations and comparisons of existing techniques.
Topics include, but are not limited to:
- Program Verification
- Model Checking
- Abstract Interpretation
- Abstract Domains
- Program Synthesis
- Static Analysis
- Type Systems
- Deductive Methods
- Program Logics
- First-Order Theories
- Decision Procedures
- Horn Clause Solving
- Program Certification
- Separation Logic
- Probabilistic Programming and Analysis
- Error Diagnosis
- Detection of Bugs and Security Vulnerabilities
- Program Transformations
- Hybrid and Cyber-physical Systems
- Concurrent and Distributed Systems
- Analysis of Numerical Properties
- Analysis of Smart Contracts
- Analysis of Neural Networks
- Case Studies on all of the above topics
Submissions can address any programming paradigm, including concurrent, constraint, functional, imperative, logic, and object-oriented programming.
September 9th, 2021 (extended from September 2nd, 2021): Paper submission
October 7th, 2021: Notification
November 15th, 2021: Camera-ready version due
Submissions are required to follow Springer’s LNCS format. The page limit depends on the paper’s category (see below). In each category, additional material beyond the page limit may be placed in a clearly marked appendix, to be read at the discretion of the reviewers and to be omitted in the final version. Style files and further formatting guidelines can be found on the Springer website. Submission is via EasyChair.
Submissions will undergo a single-blind review process. Accepted papers will be published in Springer’s Lecture Notes in Computer Science series. There will be three categories of papers: regular papers, tool papers, and case studies. Papers in each category have a different page limit and will be evaluated differently.
Regular papers clearly identify and justify an advance to the field of verification, abstract interpretation, or model checking. Where applicable, they are supported by experimental validation. Regular papers are restricted to 20 pages in LNCS format, not counting references.
Tool papers present a new tool, a new tool component, or novel extensions to an existing tool. They should provide a short description of the theoretical foundations with relevant citations, and emphasize the design and implementation concerns, including software architecture and core data structures. A tool paper should give a clear account of the tool’s functionality, discuss the tool’s practical capabilities with reference to the type and size of problems it can handle, describe experience with realistic case studies, and where applicable, provide a rigorous experimental evaluation. Papers that present extensions to existing tools should clearly focus on the improvements or extensions with respect to previously published versions of the tool, preferably substantiated by data on enhancements in terms of resources and capabilities. Authors are strongly encouraged to make their tools publicly available and submit an artifact. Tool papers are restricted to 12 pages in LNCS format, not counting references.
Case studies are expected to describe the use of verification, model checking, and abstract interpretation techniques in new application domains or industrial settings. Papers in this category do not necessarily need to present original research results but are expected to contain novel applications of formal methods techniques as well as an evaluation of these techniques in the chosen application domain. Such papers are encouraged to discuss the unique challenges of transferring research ideas to a real-world setting and reflect on any lessons learned from this technology transfer experience. Case study papers are restricted to 20 pages in LNCS format, not counting references. (Shorter case study papers are also welcome.)
Call for Artifacts
VMCAI 2022 offers the option to submit an artifact along with a paper. Artifacts are any additional material that substantiates the claims made in the paper and ideally makes them fully replicable. For some papers, these artifacts are as important as the paper itself because they provide crucial evidence for the quality of the results. The goal of artifact evaluation is twofold. On the one hand, we want to encourage authors to provide more substantial evidence for their papers and to reward authors who create artifacts. On the other hand, we want to simplify the independent replication of results presented in the paper and to ease future comparison with existing approaches. Artifacts of interest include (but are not limited to):
- Software, Tools, or Frameworks
- Data sets
- Test suites
- Machine-checkable proofs
- Any combination of them
- Any other artifact described in the paper
Artifact submission is optional. However, we highly encourage all authors to also submit an artifact. A successfully evaluated artifact can increase your chance of being accepted, since the evaluation result of your artifact is taken into account during paper reviewing. Additionally, badges shown on the title page of the corresponding paper give you credit for good artifact submissions. We award two types of badges. For artifacts that are successfully evaluated by the artifact evaluation committee, we grant the available badge. Artifacts that are publicly available under a DOI receive an availability badge. Authors may use all granted badges on the title page of the respective paper.
The artifact evaluation will be done in parallel with the evaluation of the submitted paper. The artifact submission deadline is one week after the paper submission deadline.
|Artifact submission opens| |
|Artifact submission|16th Sept 2021 (one week after paper submission)|
|Artifact test phase notification|23rd Sept 2021|
|Artifact clarification period|24th-28th Sept 2021|
|Artifact notification|17th Oct 2021|
All artifacts are evaluated by the artifact evaluation committee. Each artifact will be reviewed by at least two committee members. Reviewers will read the paper and explore the artifact to evaluate how well the artifact supports the claims and results of the paper. The evaluation is based on the following questions.
- Is the artifact consistent with the paper and the claims made by the paper?
- Are the results of the paper replicable through the artifact?
- Is the artifact complete, i.e., how many of the results of the paper are replicable?
- Is the artifact well-documented?
- Is the artifact easy to use?
The artifact evaluation is performed in the following two phases.
- In the test phase, reviewers check whether the artifact is functional, i.e., they look for setup problems (e.g., corrupted or missing files, crashes on simple examples). If any problems are detected, the authors are informed of the outcome, asked for clarification, and given 3 days to respond to the reviews.
- In the assessment phase, reviewers will try to reproduce any experiments or activities and evaluate the artifact with respect to the questions detailed above.
An artifact submission should consist of:
- an abstract that summarizes the artifact and explains its relation to the paper, including:
  - a URL from which a .zip or .tar.gz archive file containing the artifact can be downloaded (we encourage you to provide a DOI), and
  - the SHA256 checksum of the archive file;
- a .pdf file of the submitted paper, contained within the archive file.
The artifact evaluation chairs will download the archive file and distribute it to the reviewers. Please also look at the Artifact Packaging Guidelines section for detailed information about the contents of the submission. The abstract (as a .txt or .pdf file) should be submitted via EasyChair. The abstract should include a description of the artifact, the URL of the download link (upload your VM to a hosting service of your choice; EasyChair will not accept VM uploads), as well as the SHA256 checksum of your archive file.
We need the checksum to ensure the integrity of your artifact. You can generate the checksum using the following command-line tools.
On Windows:
CertUtil -hashfile <file> SHA256
On macOS or Linux:
shasum -a 256 <file>
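On Linux systems with GNU coreutils, the sha256sum tool produces the same checksum:
sha256sum <file>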
If you cannot submit the artifact as requested or encounter any other difficulties in the submission process, please contact the artifact evaluation chairs prior to submission.
There are two acceptable ways to submit your artifact. You may either package the artifact as an archive file and write your instructions such that the artifact evaluation committee can evaluate the artifact within a virtual machine provided by us. In this case, only submit the files required to replicate your results in the provided virtual machine, and do not include a virtual machine image in the archive file. AEC members will copy your archive file into the provided virtual machine.
The second option is to modify the given VM and re-upload it to a hosting platform of your choice. We will check the hash of the image, then load the VM to test your artifact. In this case, a README file should be placed in the home directory of the VM.
We recommend preparing your artifact in such a way that any computer science expert without dedicated expertise in your field can use your artifact, especially to replicate your results. For example, provide easy-to-use scripts and a detailed README document.
An initial version of the virtual machine is available here. The user name and password are vmcai. If you have any questions regarding the VM, or if you think the VM is unsuitable for the evaluation of your artifact, please contact the artifact evaluation chairs.
Your virtual machine must contain the following elements.
- The main artifact, i.e., data, software, libraries, scripts, etc. required to replicate the results of your paper.
  - The review will be single-blind. Please make sure that you do not (accidentally) learn the identity of the reviewers (e.g., through analytics or logging).
- A license file. Your license must allow the artifact evaluation chairs to download the artifact and distribute it to the artifact evaluation committee members, and it must allow the committee members to evaluate the artifact, e.g., to use, execute, and modify the artifact for the purpose of artifact evaluation.
- A README text file that introduces the artifact to the user and guides the user through the replication of your results. Ideally, it should describe the structure and content of your artifact, as well as the steps needed to set up the artifact within the VM. To simplify the reviewing process, we recommend providing an installation script where necessary (a sketch follows this list). Please support the reviewers not only during the main review phase but also during the testing phase: provide instructions that allow installation and rudimentary testing in as little time as possible, so that any technical difficulties surface early.
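As a rough illustration, an installation script could look as follows. This is a minimal sketch only: the package names, the requirements.txt file, and the src/ directory are placeholder assumptions, not part of this call, and should be adapted to your artifact.

#!/bin/bash
# install.sh - hypothetical setup script; all package and file names are placeholders
set -e                              # abort on the first error
sudo apt-get update                 # assumes a Debian-based VM image
sudo apt-get install -y build-essential python3 python3-pip
pip3 install -r requirements.txt    # example: the artifact's Python dependencies
make -C src                         # example: build the tool shipped in src/
echo "Setup complete; see the README for how to replicate the results."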
Document in detail how to replicate the results of the paper:
- Please document which claims or results of the paper can be replicated with the artifact and how (e.g., which experiment must be performed). Please also explain which claims and results cannot be replicated and why.
- Describe in detail the steps that need to be performed to replicate the results in the paper. To simplify the reviewing process, we recommend providing evaluation scripts where applicable (see the sketch after this list).
- Precisely state the resource requirements (RAM, number of cores, CPU frequency, etc.) that you used to test your artifact. In most cases, your resource requirements should be modest and allow replication of the results even on laptops. If your tool demands more specialized resources than a laptop can provide, note this in your README.
- For each task/step of the replication, provide an estimate of how long it will take to perform it (or how long it took for you), and state the exact machine(s) you used.
- For tasks that require a large amount of resources (hardware or time), we recommend providing a way to replicate a subset of the results with reasonably modest resource and time limits, e.g., within 8 hours on a reasonable personal computer. In this case, please also include a script that replicates only this subset of the results. If this is not possible, please contact the artifact evaluation chairs early, and in any case before submission.
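As a rough illustration of the two recommendations above, a top-level evaluation script might look as follows. The script name, the experiment scripts it calls, and the --subset flag are hypothetical placeholders, not requirements of this call; adapt them to your artifact.

#!/bin/bash
# replicate.sh - hypothetical top-level evaluation script (all names are placeholders)
# Usage: ./replicate.sh           replicate all results (may take many hours)
#        ./replicate.sh --subset  replicate a representative subset (e.g., < 8 hours)
set -e
if [ "$1" = "--subset" ]; then
    ./run_experiments.sh --benchmarks benchmarks/small   # example: small benchmark set
else
    ./run_experiments.sh --benchmarks benchmarks/all     # example: full benchmark set
fi
./make_tables.sh results/   # example: regenerate the paper's tables from the raw results
echo "Done; compare the files in results/ with the tables in the paper."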
The artifact evaluation committee uses the submitted artifact only for the artifact evaluation. It will not publicize the artifact or any parts of it during or after the evaluation. Artifacts and all associated data will be deleted at the end of the evaluation process. We encourage the authors of artifacts to also make their artifacts permanently available, e.g., on Zenodo or figshare, and to refer to them in their papers via a DOI. All artifacts for which a DOI exists that is known to the artifact evaluation committee are granted the availability badge.