diff --git a/docs/ant2/actionlist.html b/docs/ant2/actionlist.html
new file mode 100644
index 000000000..67b2eaff9
--- /dev/null
+++ b/docs/ant2/actionlist.html
@@ -0,0 +1,414 @@
+
+
++ ++
+ This document lists a number of actions that will guide us in the evolution
+ of Ant1.x and provide a solid basis on which to launch Ant2.0. Feel free to add to
+ this list of actions as our vision of Ant2 solidifies. Associated with each action
+ is a list of victims who have "volunteered" to have a go at the action, and a status.
+ The status column tells us how far along the action is, while the victim column tells
+ us exactly who is doing what. It is fine for a group of people to work on a single area.
+
+
+
+
+++
++ ++
+ There has long been a recognition that it would be nice if Ant supported some
+ notion of a virtual filesystem layer. This layer would allow you to treat
+ resources located and retrieved via different mechanisms in a uniform way.
+ For instance it would allow the copy task to copy from an HTTP server, a CVS server,
+ an FTP server or the local filesystem using a uniform mechanism. So instead of
+ having separate tasks to operate on each different resource type, we would use
+ just one task that plugs into multiple filesystems.
+
+ When we are talking about a virtual filesystem or VFS, one of the concerns we must
+ address is how to "name" the resource. In most cases a URL or URI style access will
+ be sufficient but in other cases we may need to consider other options. So
+ "cvs://localhost:/home/cvs/jakarta-avalon/README.txt?version=1.1",
+ "ftp://some.server.com/dir/file.zip" and "file://C/WINDOWS/Desktop/MyFile.txt"
+ are all examples of referring to different resources.
+
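+ As a rough illustration of how such names break down (assuming a JDK that provides
+ java.net.URI - nothing below is an agreed Ant API), the FTP example above can be
+ pulled apart into a scheme, a host and a path:
+
+import java.net.URI;
+
+public class NamingExample
+{
+    public static void main( String[] args ) throws Exception
+    {
+        // One of the example resource names from above.
+        URI uri = new URI( "ftp://some.server.com/dir/file.zip" );
+        System.out.println( "scheme = " + uri.getScheme() ); // ftp
+        System.out.println( "host   = " + uri.getHost() );   // some.server.com
+        System.out.println( "path   = " + uri.getPath() );   // /dir/file.zip
+    }
+}
+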
+ Another concern that must be addressed is the capabilities of both the resources and
+ the filesystem. For instance it is possible to both read and write resources
+ using the "file" protocol, but only possible to write resources using "mailto".
+ The act of copying a file to a "mailto" URL would actually post the files as
+ resources, while copying to a "file" URL would duplicate the resource somewhere on
+ the local filesystem.
+
+ So we need to determine a list of capabilities. Some examples would be "read",
+ "write", "list" (can you list directories), "type" (can you get the MIME type),
+ "access permissions" (can you tell what permissions a resource has),
+ "modify permissions" (can you modify permissions) etc. Some of these capabilities
+ can be associated with particular resources, while others may need to be
+ associated with a whole filesystem/protocol (i.e. there is no standard mechanism
+ to perform "list" on general "http" URLs). Thus a list of all these capabilities
+ and a mapping to the various protocols will need to be established.
+
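+ A sketch of how that mapping might be written down in code is shown below. The name
+ FileSystemProvider and the capability strings are purely hypothetical - this is not
+ an existing API, just one way of expressing the idea:
+
+/**
+ * Hypothetical sketch only. A provider for a protocol ("file", "http",
+ * "mailto"...) reports which capabilities it supports, either for the
+ * whole filesystem or for a particular resource.
+ */
+public interface FileSystemProvider
+{
+    String CAPABILITY_READ = "read";
+    String CAPABILITY_WRITE = "write";
+    String CAPABILITY_LIST = "list";
+    String CAPABILITY_TYPE = "type";
+    String CAPABILITY_GET_PERMISSIONS = "access-permissions";
+    String CAPABILITY_SET_PERMISSIONS = "modify-permissions";
+
+    /** Does the protocol as a whole support the capability (e.g. "list" on "http" does not)? */
+    boolean hasCapability( String capability );
+
+    /** Does this particular resource support the capability? */
+    boolean hasCapability( String uri, String capability );
+}
+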
+ Next we need to determine if we are going to support the notion of "mounting"
+ URLs. For instance, if we need to copy files from an FTP server, do we always
+ need to specify the full URL - no matter how convoluted it is (i.e.
+ "ftp://fred:secret@some.server.com:28763/home/fred/project2/dir/file.zip")
+ or can we mount this on a VFS and access it via a shorter URL? That is, we could
+ mount "ftp://fred:secret@some.server.com:28763/home/fred/" onto "vfs:/home"
+ and then just access the resources via "vfs:/home/project2/dir/file.zip".
+ This would make dealing with long URLs easier and more uniform.
+
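+ If mounting is supported, the user-visible API could be as small as the following
+ sketch. Again, VfsManager and these method names are hypothetical, not a decided
+ design:
+
+/**
+ * Hypothetical sketch of a mount table for the proposed VFS.
+ */
+public interface VfsManager
+{
+    /**
+     * Mount the filesystem rooted at realUri at the given virtual path, e.g.
+     * mount( "/home", "ftp://fred:secret@some.server.com:28763/home/fred/" ).
+     */
+    void mount( String virtualPath, String realUri );
+
+    /**
+     * Resolve a "vfs:" style name such as "vfs:/home/project2/dir/file.zip"
+     * back to the underlying resource name.
+     */
+    String resolve( String virtualUri );
+}
+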
+ So after we have decided what our options are, we need to actually go about
+ implementing the solution. It may be the case that existing VFS solutions
+ could be reused with minor changes, and thus we could save ourselves a lot of
+ work. Candidates would be the NetBeans VFS, Sun's XFile API or other
+ available directory APIs (like JNDI). If none of them suit then we will need
+ to write our own layer.
+
+
++ ++
+ Currently Ant has a mixture of tasks from various stages of its evolution, with different
+ authors and each utilizing different naming patterns. Some tasks use names such as
+ "src" and "dest" while others use "file" and "tofile". It would be preferable if
+ consistent naming patterns were used. It is recommended that we come up with a "best
+ practices" document describing our recommended naming patterns.
+
+ Before we can come up with such a document we need to identify common patterns throughout
+ the tasks. Several tasks have the notion of transforming input from a "source"
+ to a "destination", so we should have consistent naming schemes for these attributes and
+ elements. Analysis of existing tasks will likely bring out other similar patterns. Once
+ we have identified and documented these similarities we can establish conventions.
+
+
++ ++
+ Currently our filesets allow us to select a set of files based on name patterns.
+ For instance we could create a set of all the files that end with ".java". However
+ there are cases where you wish to select files based on other attributes, such as
+ whether they are read-only or older than a specified date.
+
+ The selector API is one mechanism for doing this. It will allow you to
+ build filesets based on criteria other than the name. Some possible criteria would be:
+
++
+- Is the file readable?
+- Is the file writeable?
+- What date was the file modified on?
+- What size is the file?
+- Do the contents contain the string "magic"?
+
+ If we end up supporting a VFS then we could expand the number of selectors
+ considerably. A mock representation that has been proposed before is the following.
+ Of course this is subject to change as soon as someone wants to tackle this action ;)
+
+<include>
+  <selector type="name" value="**/*.java"/>
+  <selector type="permission" value="r"/>
+
+  <!-- could optionally be directory/or some other system specific features -->
+  <selector type="type" value="file"/>
+  <selector type="modify-time"
+            operation="greater-than"
+            value="29th Feb 2003"/>
+</include>
+
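+ On the Java side, each selector type above could boil down to something as simple as
+ the following interface. This is only a sketch of the idea - the name FileSelector and
+ its signature are not an agreed API:
+
+import java.io.File;
+
+public interface FileSelector
+{
+    /** Return true if the given file should be included in the fileset. */
+    boolean isSelected( File file );
+}
+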
++ ++
+ When you execute a task such as "javac" there are two types of dependency information
+ that are important to analyze before we determine whether we need to recompile a file.
+ Say we are compiling Foo.java; it may depend on the Bar.java file. We call this
+ "structural" dependency information - the structure of the source file determines
+ what other files it depends upon. However there is also "environmental" dependency
+ information. For instance if the Foo.java file was compiled with debug="true" last
+ run and this time needs to be compiled with debug="false" then it is out of date and
+ needs to be recompiled. We call this "environmental" dependency information "coloring".
+
+ So we need to create an infrastructure that allows tasks to manage "coloring". A task
+ should be able to add coloring information for each resource processed. When the task
+ comes to process the resource again it will detect whether the coloring has changed
+ and, if it has, force a recompile.
+
+ An API for such a bean has yet to be established, but an example API would be:
+
+ColoringManager cm = ...;
+cm.addColor( "debug", "true" );
+cm.addColor( "optimize", "false" );
+cm.setFileSet( myFileSet );
+File[] files = cm.getOutOfDate();
+
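+ The usage above implies an interface roughly like the following. This is only a guess
+ at the shape of the bean; the use of the existing FileSet type is an assumption:
+
+import java.io.File;
+
+import org.apache.tools.ant.types.FileSet;
+
+public interface ColoringManager
+{
+    /** Record an environmental "color" (e.g. debug="true") for the next run. */
+    void addColor( String name, String value );
+
+    /** The set of files whose coloring should be checked. */
+    void setFileSet( FileSet fileSet );
+
+    /** Return the files whose coloring has changed since they were last processed. */
+    File[] getOutOfDate();
+}
+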
++ ++
+ At present each Ant task is required to manage dependency checking separately.
+ This makes even simple dependency checking a lot of work to implement, and to this
+ day many of the core tasks do not implement it correctly. I am specifically
+ talking about "structural" dependency information here. The main reason is that it is
+ painful to implement.
+
+ Some tasks do no dependency checking and will recompile/transform/etc every time.
+ Others may perform simple dependency checking (i.e. if the source file is newer than
+ the destination file then recompile). Ideally a dependency system would actually
+ calculate the dependencies correctly. So we need to have some mechanism to determine
+ that foo.c actually depends upon foo.h, bar.h and baz.h. As this information is
+ particular to each different task we need to allow tasks to implement this behaviour,
+ possibly by supplying an interface of the form:
+
+public interface DependencyGenerator
+{
+    File[] generateDependencies( File file );
+}
+
+ Generating the dependency information is a costly operation and thus we do not want to
+ be doing it every time you run Ant. We want to generate it on the initial build and then
+ persist it somewhere. Every time a file is out of date, its dependency information would
+ be regenerated and stored in the dependency cache. Ideally this cache would also store the
+ above mentioned coloring information. So the entry for foo.c may declare that it is
+ dependent upon foo.h, bar.h and baz.h, as well as being compiled with the -O2 flag. If
+ any of the dependencies have changed or are out of date then foo.c would need to be
+ recompiled.
+
+ A possible API would be:
+
+DependencyManager dm = ...;
+dm.setFileSet( myFileSet );
+dm.setDependencyCache( myDependencyCacheFile );
+File[] files = dm.getOutOfDate();
+
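+ To make the DependencyGenerator idea above concrete, here is a deliberately naive
+ implementation that scans a C source file for #include "..." lines and reports the
+ named headers as dependencies. It is illustrative only, not proposed Ant code:
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileReader;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+public class CIncludeDependencyGenerator implements DependencyGenerator
+{
+    public File[] generateDependencies( File file )
+    {
+        List deps = new ArrayList();
+        try
+        {
+            BufferedReader in = new BufferedReader( new FileReader( file ) );
+            String line;
+            while( ( line = in.readLine() ) != null )
+            {
+                line = line.trim();
+                // Only handle the simple #include "header.h" form.
+                if( line.startsWith( "#include \"" ) && line.endsWith( "\"" )
+                    && line.length() > "#include \"\"".length() )
+                {
+                    String name = line.substring( "#include \"".length(), line.length() - 1 );
+                    deps.add( new File( file.getParentFile(), name ) );
+                }
+            }
+            in.close();
+        }
+        catch( IOException ioe )
+        {
+            // A real implementation would report this; here we return what we found.
+        }
+        return (File[])deps.toArray( new File[ deps.size() ] );
+    }
+}
+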
++ ++
+ Exec and its related classes have evolved through several iterations and thus
+ are not as cleanly designed and as reusable as they could be. Someone needs to pull
+ apart Exec, analyze which parts can be turned into JavaBeans, and decouple those beans
+ from the Ant infrastructure. Once that is done these beans will be much easier to reuse
+ from other tasks without relying on gaining access to another task instance.
+
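+ As a feel for what the decoupled result might look like, the following is a minimal
+ sketch of an exec bean with no dependency on the Task hierarchy. The name
+ CommandLauncher and its methods are hypothetical, not a description of the current
+ Exec classes:
+
+import java.io.File;
+import java.io.IOException;
+
+public class CommandLauncher
+{
+    private String[] command;
+    private File workingDirectory;
+
+    public void setCommand( String[] command )
+    {
+        this.command = command;
+    }
+
+    public void setWorkingDirectory( File workingDirectory )
+    {
+        this.workingDirectory = workingDirectory;
+    }
+
+    /** Run the command, block until it exits and return the exit code. */
+    public int execute() throws IOException, InterruptedException
+    {
+        Process process = Runtime.getRuntime().exec( command, null, workingDirectory );
+        return process.waitFor();
+    }
+}
+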
+
++ ++
+ Much as Exec should be decoupled from the Ant runtime, so should the classes that
+ implement the java task, for the same benefits.
+
+
++ ++
+ Currently we have a few tasks that have multiple implementations. For instance the Javac
+ task can actually call jikes, jvc, classic javac or modern javac. Similar things will be
+ seen with the jspc task and the cc task (if it ever gets written). We need to examine this
+ pattern and see if there is a way to generalize it and make it easier to write such tasks.
+
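+ One possible generalization (sketched only - these names do not describe the existing
+ Javac internals) is a small adapter interface plus a factory that maps a symbolic
+ compiler name onto an implementation:
+
+import java.io.File;
+
+public interface CompilerAdapter
+{
+    /** Compile the given source files, returning true on success. */
+    boolean compile( File[] sources );
+}
+
+class CompilerAdapterFactory
+{
+    /**
+     * Map a symbolic name ("jikes", "jvc", "classic", "modern", ...) onto an
+     * adapter. Concrete adapters are omitted from this sketch.
+     */
+    static CompilerAdapter createAdapter( String name )
+    {
+        throw new IllegalArgumentException( "No adapter registered for " + name );
+    }
+}
+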
+
++ ++
+ We have already decided that we are going to package Ant tasks in separate jars and
+ have some sort of descriptor to describe the contents of the jar. However we have
+ not yet decided how we will break up the tasks. Do we break the tasks up into
+ related tasks, or into groups that are likely to be used together, or what? A possible
+ breakdown would be:
+
++
+- jdk tasks: javac, javadoc, rmic etc
+- text tasks: regex replace, fixcrlf etc
+- unix tasks: chmod, etc
+- file tasks: copy, move, etc
+
++ ++
+ When we are copying files from one location to another it is currently possible
+ to rename them using a mapper. So we could rename Foo.java to Foo.java.bak.
+ On occasion it is useful to modify file attributes other than the name in such
+ operations. So we could copy the files to another location and make them
+ read-only in one operation.
+
++ ++
+ This is partially related to the above action. Filters could be seen as a way
+ to modify the content attribute of a file during a copy/move. It would be
+ preferable if filtering could be abstracted to use FilterOutputStreams
+ to perform the content modification. That way new filter types could be constructed
+ and used during file copy (e.g. a Perl FilterOutputStream that
+ allowed you to use Perl expressions to transform the input).
+
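+ A minimal sketch of the idea, using the standard java.io.FilterOutputStream: a filter
+ that upper-cases content as it is written. A copy task could wrap the destination
+ stream in any number of such filters. The class itself is illustrative, not an
+ existing Ant type:
+
+import java.io.FilterOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+
+public class UpperCaseFilterStream extends FilterOutputStream
+{
+    public UpperCaseFilterStream( OutputStream out )
+    {
+        super( out );
+    }
+
+    public void write( int b ) throws IOException
+    {
+        // Transform each byte as it passes through to the wrapped stream.
+        out.write( Character.toUpperCase( (char)b ) );
+    }
+}
+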
++ ++
+ When including fragments of XML we are currently forced to use relative paths.
+ However this is sometimes undesirable when a single fragment needs to be used
+ across several projects in several different locations. Instead we could use
+ a Catalog to name the fragment and then each developer would only need to install
+ the fragment once and it would be accessible from all the projects.
+
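+ Assuming the build file is read through a SAX parser, the catalog idea could hang off
+ the standard org.xml.sax.EntityResolver hook: public identifiers are looked up in a
+ locally installed catalog instead of being resolved as relative paths. The class below
+ is a sketch only:
+
+import java.io.File;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.xml.sax.EntityResolver;
+import org.xml.sax.InputSource;
+
+public class FragmentCatalog implements EntityResolver
+{
+    private final Map catalog = new HashMap();
+
+    /** Register a locally installed copy of a fragment under its public id. */
+    public void addEntry( String publicId, File localCopy )
+    {
+        catalog.put( publicId, localCopy );
+    }
+
+    public InputSource resolveEntity( String publicId, String systemId )
+    {
+        File local = (File)catalog.get( publicId );
+        if( local != null )
+        {
+            return new InputSource( local.toURI().toString() );
+        }
+        return null; // fall back to the default relative-path behaviour
+    }
+}
+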
+
++ ++
+ Look at the feasibility of performing i18n on the Ant runtime and core tasks. Look at
+ how much work it will be and how useful it would be. Look at utilizing i18n from
+ existing projects such as Avalon.
+
\ No newline at end of file