ArDoCo (Architecture Documentation Consistency) is a framework that connects architecture documentation and models via Traceability Link Recovery (TLR) while identifying missing or deviating elements (inconsistencies). An element can be any representable item of the model, such as a component or a relation. To do so, ArDoCo first creates trace links and then uses them, together with other information, to identify inconsistencies.
ArDoCo is actively developed by researchers of the Modelling for Continuous Software Engineering (MCSE) group of KASTEL - Institute of Information Security and Dependability at the Karlsruhe Institute of Technology (KIT).
To execute the core algorithms from this repository, you can write your own user interface, which should use the ArDoCoRunner.
Future user interfaces like an enhanced GUI or a web interface are planned.
For more information about the setup or the architecture, have a look at the Wiki. Parts of the documentation are outdated, but the general overview and setup instructions should still hold.
To test the core, you can use the case studies and benchmarks provided in ..
To use ArDoCo, add the following dependency to your Maven `pom.xml`:

```xml
<dependencies>
  <dependency>
    <groupId>io.github.ardoco.core</groupId>
    <artifactId>pipeline</artifactId> <!-- or any other subproject -->
    <version>VERSION</version>
  </dependency>
</dependencies>
```
For snapshot releases, make sure to add the following repository:

```xml
<repositories>
  <repository>
    <releases>
      <enabled>false</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
    <id>mavenSnapshot</id>
    <url>https://s01.oss.sonatype.org/content/repositories/snapshots</url>
  </repository>
</repositories>
```
Text preprocessing runs locally by default, but there is also the option to host it as a microservice. The benefit is that the NLP models do not need to be loaded on each run, saving runtime (and local memory).
The microservice can be found at ArDoCo/StanfordCoreNLP-Provider-Service.
The microservice is secured with credentials; to use it, you need to activate microservice usage and configure its URL. These settings are provided to the execution via the following environment variables:
```shell
NLP_PROVIDER_SOURCE=microservice
MICROSERVICE_URL=[microservice_url]
SCNLP_SERVICE_USER=[your_username]
SCNLP_SERVICE_PASSWORD=[your_password]
```
The first variable, `NLP_PROVIDER_SOURCE=microservice`, activates the use of the microservice.
The next three variables configure the connection; set them to match your deployed microservice.
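As a minimal sketch, the variables above can be exported in the shell before starting any ArDoCo-based application, which then inherits them from its environment. The URL and credentials below are placeholders, not real deployment values:

```shell
# Sketch: enable the NLP microservice for an ArDoCo run.
# All values below are placeholders -- replace them with the
# configuration of your own deployed microservice.
export NLP_PROVIDER_SOURCE=microservice
export MICROSERVICE_URL="https://nlp.example.org"   # placeholder URL
export SCNLP_SERVICE_USER="ardoco-user"             # placeholder username
export SCNLP_SERVICE_PASSWORD="changeme"            # placeholder password

# Any process started from this shell (e.g. a JVM running your
# ArDoCoRunner-based application) now sees these settings.
echo "$NLP_PROVIDER_SOURCE"
```

If you prefer not to export credentials in your shell profile, the same variables can be supplied per invocation (e.g. via your CI system's secret mechanism).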
The initial version of this project is based on the master's thesis Linking Software Architecture Documentation and Models.
This work was supported by funding from the topic Engineering Secure Systems of the Helmholtz Association (HGF) and by KASTEL Security Research Labs (46.23.01).