DACA is made of several parts: the code analysis tools, the infrastructure that runs those tools, and the results post-processing (currently this simple web interface). You can therefore help with one or more of the individual parts. Thanks in advance! Important: after writing some of this page, it sounds too bureaucratic. Read it and take it as a guideline; if you start to feel bored, skip it and let your intuition do the rest, the community will guide you. You can also contribute to this document.
The general feedback (non-development) list is
firstname.lastname@example.org; there is also a development mailing list.
Don't forget to subscribe! If you don't, make sure you add a note to your email saying you are not subscribed and would like a copy of responses to be sent to your address.
Raphael Geissert's evaluation process is more or less the following:
- Tools should not generate many false positives, unless the goal of the tool is so broad that false positives are to be expected.
- They should require as little setup as possible (being able to automate such setup counts as OK).
- No user interaction can be required.
- The reports should be fairly easy to understand, should be machine readable (working with upstream to achieve that is OK too; direct HTML output is usually not OK), and should not require fancy tools (at least not ones that require user interaction) to read them. Inserting the report directly into the source code is not acceptable either.
- Although the tool may not be packaged for Debian, there should be an ongoing effort to do so (somebody needs to be the first responder for issues with the tool).
- Executing the to-be-checked code is discouraged (but certainly not forbidden, since some tools may only work that way).
- External network interaction is not allowed.
- Finally, the build-time and run-time requirements should be relatively small.
After a tool has been partially or completely evaluated, a script that launches the tool and stores the report in a file needs to be written, along with the integration with the web interface. Environment requirements need to be documented, together with a brief (as short as a yes or a no) description of how the tool meets the requirements of the evaluation. Once it's all written, the tool should be proposed on the development mailing list (in fact, it is better if the mailing list is contacted beforehand) to give community members the opportunity to review and comment. Code changes will be committed once they are accepted, and the tool run whenever time (CPU and human) permits.
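A launcher script along those lines could look like the following sketch. The directory layout, the .report suffix, and the TOOL variable are illustrative assumptions, not the real DACA conventions (here a stand-in command is used in place of an actual checker):

```shell
#!/bin/sh
# Hypothetical DACA launcher sketch: run a checker over an unpacked source
# tree and store the raw report in a file for later post-processing.
set -e

TOOL="${TOOL:-echo}"      # stand-in; a real wrapper would invoke the checker
SRCDIR="${1:-$PWD}"       # unpacked package source
OUTDIR="${2:-reports}"    # where raw reports are stored

mkdir -p "$OUTDIR"
report="$OUTDIR/$(basename "$SRCDIR").report"

# Capture stdout and stderr; the raw output is what post-processing
# later consumes. Never fail the whole run because one tool errored.
"$TOOL" "$SRCDIR" > "$report" 2>&1 || true

echo "report stored in $report"
```

Keeping the raw output on disk, rather than post-processing inline, matches the requirement above that reports be machine readable and stored for later stages.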
This part should be done in collaboration with upstream, and unless there's a joint community effort the DACA mailing lists should not be involved. Points to consider: help fix bugs, add more checks/features and improve current ones, improve the results reporting, automate it as much as possible.
To be documented. There have been some discussions about using a general grid solution and adapting some debian.org services (DACA included) to use it.
To be documented. The current web interface is very simple and has no real knowledge of packages, distributions, versions, etc. The post-processing code needs to take the results from the store (currently files containing the raw results as generated by the tools) and generate whatever is needed: data for the web interface, a relational database, further derived data such as counts and other material for statistics, and so on.
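A minimal post-processing pass could be sketched like this. The store layout (a reports/ directory of *.report files) is an assumption for illustration, and the block seeds a sample report so it has something to walk:

```shell
# Hypothetical post-processing sketch: walk the raw report store and
# emit "<package> <line count>" as a trivial statistic. The layout and
# the .report suffix are assumptions, not the real store format.
STORE="${STORE:-reports}"
mkdir -p "$STORE"
printf 'warning: a\nwarning: b\n' > "$STORE/demo.report"   # sample raw report

for report in "$STORE"/*.report; do
    pkg=$(basename "$report" .report)
    count=$(wc -l < "$report")          # stand-in for real statistics
    printf '%s %s\n' "$pkg" "$count"
done > summary.txt

cat summary.txt
```

The same loop shape applies whatever the output target is: instead of a text file, each iteration could insert rows into a relational database or render data for the web interface.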
Only contributors who are willing to make a commitment to the
project and are trusted by the DACA community are allowed as project
members. This is because direct access to the main git repository is
given to all of the project members on Alioth.
Note: anyone can contribute code by sending patches to the development mailing list and by reviewing them. After some time people may start to trust you; whenever that happens, and you believe that having direct access to the git repositories would help you, you may apply. Also, these are not really rules, much less set in stone. Different policies may apply on a case-by-case basis, and this text may not reflect the current practices.
There's a main git repository where code that should be in
production is maintained (n.b. no commit goes live on the production
systems without human intervention). Branches of this repository are used
for major rework that is relatively functional; they should be created
only after agreement. Only authorised people may commit changes to the
main repository (being accepted into the Alioth project doesn't
automatically grant this right).
General development should be done in per-user repositories (anyone with an Alioth account, even guests, can mkdir ~/public_git and git clone --bare git://...). Members of the Alioth project can merge their changes into the main repository once they are ready for production (i.e. they've been tested); direct commits to it are allowed for small changes (typos, etc.) or large changes to branches (see above for what the branches are for).
All other members should commit to their own git repository (hosted on Alioth or anywhere else) and submit their patches (either the mbox generated by git or a link to the commits, if the changes are large) to the mailing list for acceptance.
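Producing the mbox mentioned above can be sketched as follows. The throwaway repository, identity, and file names here are placeholders purely for demonstration; in practice you would run git format-patch against the branch point in your own clone:

```shell
# Illustrative patch-submission workflow: create a throwaway repo,
# make a change, and generate an mbox-format patch to mail to the list.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "contributor@example.org"   # placeholder identity
git config user.name "Contributor"

echo base > file && git add file && git commit -qm "base commit"
base=$(git rev-parse HEAD)                        # branch point

echo change >> file && git commit -qam "add my change"

# One mbox file per commit since the branch point, ready to attach to a
# mail (or to send with git send-email, if that is set up):
git format-patch "$base" >/dev/null
ls 0001-*.patch
```

The generated files are plain mbox text, so reviewers can apply them directly with git am.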
More results and tools will be added as (CPU and human) time permits. Are you interested? Email email@example.com