SFLC Legal Summit For Software Freedom, New York, 12 October 2007

by Rowan Wilson on 15 October 2007


On 12 October 2007, the Software Freedom Law Center (SFLC) held its first summit at Columbia University School of Law in Manhattan. Established in early 2005, the SFLC provides legal help to developers of free and open source software, often at no cost. They have helped resolve conflicts within the community, as in the recent lively debate over the origins and licensing of various open source drivers for wireless LAN hardware: after an ugly spat over the re-licensing and ownership of driver code for Atheros wireless networking hardware, the SFLC conducted an exhaustive audit of the code to establish precisely who owned it. They have also launched legal action against a set-top box manufacturer - Monsoon Multimedia - which they allege has used the free software Busybox in one of its hardware products without abiding by Busybox’s governing licence, the GPL. The summit introduced the work of the SFLC and explained more about the work that they already do and intend to undertake.

The SFLC is, of course, Eben Moglen’s baby. Moglen co-drafted versions 2 and 3 of the GNU General Public License with Richard Stallman, and is thus closely associated in the public mind with the Free Software Foundation’s (FSF) view of the nature of ‘software freedom’. In practice, the FSF and the Open Source Initiative (OSI) have similar definitions for free software and open source software, respectively. Both accept a definition of ‘software freedom’ that includes so-called ‘permissive’ licences, which do not mandate that adaptations of the code, or indeed the code itself, must be distributed under the licence with which it is received. While all sides of the discussion agree on the general aims of their licensing efforts, they disagree on the optimum method of achieving them. Thus the FSF insists that compulsion is essential, while the OSI argues that compulsion is a useful tool but that permissive approaches have their value as well. Behind this utilitarian disagreement lies an ethical difference. The FSF sees aiding the proliferation of free software as a means of preserving personal freedom in a world where control of technology is increasingly a political tool. The OSI, on the other hand, is simply interested in making software itself better, which perhaps explains its willingness to accept various potential licensing routes to that end.

The question in my mind, then, on arriving at this event was: where would the SFLC position itself in this debate? The use of the word ‘Freedom’ in their title, as well as Moglen’s CV, indicated an affiliation with the FSF end of the argument. Certainly in the angry mailing list debates over the Atheros code, many painted the SFLC as a creature of Stallman’s, working to draw as much code as possible into the GPL-protected fold. Was it a stealth wing of the FSF, formed of lawyers? I was determined to find out.

This report will describe each session individually as follows:

  • Ensuring software freedom - Daniel Ravicher
  • Copyrights - James Vasile
  • Reverse engineering and clean room development - Matthew Norwood and Aaron Williamson
  • Organisational issues - Karen Sandler
  • Patents - Richard Fontana
  • Internationalisation - Eben Moglen

Ensuring software freedom

Ravicher, who is the Legal Director of the SFLC, began the event with some densely metaphorical remarks on the nature of freedom and its progress in the world of software. He likened the development of the software freedom movement to the growth of a child into adulthood. Just as a child begins to define sets of rules that will govern their future progress, the software freedom movement had codified their rules into agendas and licences. Ravicher cited Eric Raymond’s ‘The Cathedral and the Bazaar’ as an agenda for software freedom. Sometimes there could be too much freedom, he added, and gave the proliferation of FLOSS licences as an example of this.

Threats to freedom were comparable to childhood infections, and the growing body developed antibodies to counter these, just as the software freedom movement developed new mechanisms and rules to counter emerging threats.

Now, Ravicher said, we had reached the end of this developmental process, and the software freedom movement had achieved maturity. The SFLC’s filing of suit against Monsoon Multimedia for their alleged Busybox GPL violation was a mark of that maturity; having defined the ‘adult’ ruleset that protects software freedom, there must be consistency in its application, and the SFLC aimed to provide this.

Ravicher ended by thanking all authors of free and open source software for their work and their commitment to the public good.

Ravicher’s remarks were general and notable in the main for their rich figurative language rather than any great detail. However, a few interesting turns of phrase and examples did intrigue me. Early on, he referred to a section of the community who supported software freedom as a result of ‘quasi-religious beliefs’. Taken along with the approving reference to Raymond’s essay - which stresses the utilitarian value of open development and provided the impetus for the formation of the OSI - this phrase seemed to me to indicate a more holistic approach to the FLOSS community than some have so far attributed to the SFLC. As one might perhaps expect from a lawyer, it expresses a realist’s view of the breadth and composition of the community.


Copyrights

Ravicher quickly handed over to James Vasile, who admitted that he had had difficulty working out what kind of people would attend an event like this. As a result, he had planned two talks, and asked us to vote for the one we wanted. On offer were ‘Managing Copyright in an Open Source Project’ and ‘Licensing and Communities’. As an audience we were unable to give a clear preference, so Vasile chose the former, to some groans.

Personally, I was glad, as this had been my choice. In OSS Watch we frequently advise projects on the issue of copyright management, and although some of the issues would differ due to regional variations in law, it would be interesting to hear the SFLC’s take on the issue.

Vasile began by explaining why a project should want to manage their copyright. Being able to provide an auditable set of paperwork that establishes the clear right of the project to distribute their code is invaluable if problems arise in the future. Projects starting out must ask themselves, Vasile stressed, whether they will attempt to take ownership of all the copyright in a project’s contributions or merely accept licences to material from contributors. In general, the SFLC advises projects to opt for the former route, and have contributors assign the ownership of their copyright in contributions to a single central person or entity. Trying to achieve this kind of centralisation after a project has begun is close to impossible, he warned, and thus settling the ownership model should be one of the first things that a project discusses and implements. Having a single owner of the code means that, down the road, certain activities become significantly easier. Relicensing the code can be achieved without hunting down all contributors, and enforcing the licence against violators is considerably simplified.

Although centralisation does mitigate some problems, achieving it is problematic. After deciding on a centralised model, questions must be asked of each new contributor to establish that they can actually assign the contributions that they make. A project should find out as much as possible about the contributor’s status, including:

  • their home country
  • their employment
  • whether they know of any potential copyright encumbrance on material they create
  • whether they know of any potential problems that might proceed from their exposure to patented code

For contributors who code professionally, it can be difficult to establish which parts of a contributor’s output are their own and which are their employer’s. In these circumstances, it is desirable to get an ‘employer waiver’, in which the employer overtly disclaims ownership of any code their employee contributes to the project.


A member of the audience then asked, in the light of all these requirements to gain assent from contributors and their employers, what constitutes a record of agreement? Should these documents all be on paper, signed and posted, or are electronic surrogates legally sufficient? Slightly side-stepping the substance of the question, Vasile acknowledged that developers are often fiercely paper-averse and that therefore the only realistic answer was to settle for the best form of assent you can get. Yes, signed paper was best, with faxes second and emails somewhere behind both. Setting such high standards of documentation could be a barrier to contribution, however, so sensible compromises must be made.

Vasile estimated that, on average, the process of setting and checking a contributor questionnaire should take about a week, after which the contributor could work on the project uninterrupted until they next changed jobs.

Another questioner asked whether Vasile would recommend registering copyright - an issue that does not arise here in the UK, where copyright comes into being automatically when code is written. Vasile’s answer was interesting to me nevertheless, and reflected again the separation between the SFLC and the FSF. He advised that it was worthwhile for US projects to register copyright if they had decided that they were going to pursue violators of their licence. If not, it was a pointless task. This was another discussion that Vasile stressed projects should have early on: were they ‘rabid idealists’ who would chase down and take action against misusers of their code, or were they likely to just shrug and allow it on the basis that the main aim was just to ‘get the code out there’? To me, Vasile’s distinction had echoes of the GPL vs permissive licence debate, and he seemed to implicitly accept both decisions as legitimate within the free software community.

Summing up, Vasile restated the arguments for centralised ownership of copyright and good documentation, adding a further advantage - better appeal to ‘conservative organisations’. Business, particularly big business, is far more likely to use, distribute and contribute to a free or open source project if they can see a paper trail assuring them that the copyright status of the code is settled and legitimate.

In response to a further question on joint copyright ownership, Vasile explained that the kind of assignment favoured by the SFLC was in fact an ‘assignment plus licence back’, meaning that in exchange for the assignment the contributor receives a licence from the project permitting them to act as though they still owned the copyright. All they lose is the ability to exclusively license it.

Reverse engineering and clean room development

Next up was a two-part presentation by Aaron Williamson and Matthew Norwood. Williamson began by explaining that it was part of his job to try to persuade developers to use an inefficient development methodology - clean room design. This process was useful under various circumstances - for example, when some code needed to be reimplemented because it had been contributed without the consent of the owner, or when implementing functionality that already exists in a proprietary application. The aim of the process is to create a documented coding process that is demonstrably free of unauthorised copyright material.

A team for undertaking clean room design consists of three parts:

  • The clean room team - responsible for implementing the specification. The clean room team has no access to the copyright materials that are being excised or reimplemented.
  • The specification team - responsible for drawing up the specification document. The specification team may have access to the copyright materials that are being excised or reimplemented.
  • The liaison team - responsible for mediating and recording all communications between the other two teams, blocking any potentially contaminating copyright material from entering the clean room.

Williamson then went into the US legal standard for infringement of copyright in computer software, and the snappily titled ‘Abstraction, Filtration, Comparison’ test. While the details he gave were specific to American law, it seems worthwhile to sum them up. After all, software released under an open source licence here in the UK may be distributed in any jurisdiction worldwide.

Establishing whether infringement has taken place begins by finding if copying has taken place. Assuming that there is no direct evidence of copying and the alleged infringer does not admit copying, this becomes an examination of circumstantial evidence. To make a reasonable inference that copying has taken place, there needs to have been demonstrable access to the allegedly infringed code, and a close degree of similarity in the resulting code. It is for this reason that clean room developers must avoid exposure to copyright code in the area in which they work. Code leaks, such as the 2004 internet posting of sections from the Microsoft Windows NT source, are a prime source of risk to potential clean room developers. Having read the code, the developer is ‘contaminated’ and their subsequent work will always carry a suspicion of derivation from the leaked code.

This unpleasant reality is made worse by the fact that ‘similarity’ in this sense need not be similarity at the line-by-line level of the source files. In fact, these can differ completely and the code they constitute still be infringing if its higher module-level structure is similar to that of the allegedly infringed code. It is this potential for structural derivation that leads to the US law test known as ‘Abstraction, Filtration, Comparison’. When determining if structural copying has taken place, a US court will first produce a series of models of the functionality of the allegedly infringed program, starting with the most general. This leads to three views of the program: a general summation of its functionality, an object map of its structure and a listing of its complete source. These items are then filtered to remove anything that cannot be infringing, such as structure or code that is dictated solely by efficiency considerations, structure or code that is dictated by external factors such as industry standards and norms of practice, and finally structure or code which is in the public domain. After this filtration, what is left is the material that is original and whose copying would represent infringement. It is this material that is compared with the corresponding structures in the allegedly infringing program to determine if it is in fact infringing.

A member of the audience asked about decompilation as a strategy for the specification team. Williamson said that while there had been some legal precedent in the US for the legitimacy of decompilation (Sega vs Accolade), this was increasingly irrelevant as End User Licence Agreements (EULAs) were altered to make users agree to not decompile, whether they were legally entitled to or not.

At this point, Williamson’s colleague Matthew Norwood took over. These days, Norwood explained, clean room development was most frequently used in developing drivers for hardware or interoperability with networking software. He then went into some detail about what happens if a ‘forensic’ examination of a development process is called for, perhaps as a result of accusations of infringement. Lawyers will appear on the scene and demand to see versioning commit logs, commit comments, paperwork covering assignment in and licensing in of code and disclaimers from contributors’ employers. If the development team used a clean room methodology, they will also be interested in all the communications between the specification team and the clean room team, as well as system and network access logs of all participants. The employment history of the developers will also come into focus, and any attendant risks of exposure to contaminating copyright material. The materials used by the specification team would also be examined. Decompilations of the code to be re-implemented, even if not made impractical by EULAs, could lead to some small infringements in the form of linguistic strings copied into the ‘clean’ code via the specification or communication with the developers. On the whole, Norwood advised, steer clear of decompilation as a strategy. Using the outputs of black box testing and observations of network interactions was a safer strategy, while the safest strategy of all was to limit oneself to studying only the software documentation.

Norwood went on to point out that all this effort was designed to eliminate copyright problems. Even after carefully administering a clean room development regime, the resulting code could easily end up infringing someone’s patent, or - if named thoughtlessly - someone’s trademark.


Some questioners from the floor raised the issue of the effectiveness of EULAs, and the legal effect of acquiring software second-hand or examining a copy that did not in fact belong to you. No consensus was reached on how far these strategies might protect one under US law. A questioner from Italy stated that Italian law acknowledged the legitimacy of software bought second-hand. Another suggested that therefore www.ebay.it might be a good place for reverse engineers to bookmark. It was stated that in general the EU had more stringent restrictions on reverse engineering than the US.

A further questioner raised an interesting point about public testing: what might the legal effect be of incorporating changes into clean-room-developed software that are prompted by public user feedback. Could it potentially contaminate the painstakingly achieved result of the process? Norwood acknowledged that yes, it could do so. Every time new copyright is added, the software is potentially contaminated. All that a project can do is take care in accepting external materials, re-assess the software regularly and if necessary start the clean-room cycle again.

Finally a questioner raised Microsoft’s Shared Source Programme as an example of a source of potential contamination. Could it even be a tactic to make open source development more risky? Norwood avoided confirming this speculation, but noted that there are always means to find ‘clean’ programmers. Earlier he had jokingly said that the ideal clean room developer was ‘sprung from the sea’. Now, more helpfully, he suggested that retraining developers from entirely distinct areas of endeavour could be a fruitful approach, or even training your developers from scratch.

Organisational issues

After a short but much-needed break, Karen Sandler took the stage to talk about organisational structures for free and open source projects, and to introduce the SFLC’s own not-for-profit umbrella organisation, the Software Freedom Conservancy. Traditionally, Sandler pointed out, open source projects are thought of as a social group of individuals with a software-related goal. Things get complicated for such groups quickly, however. Sandler gave the example of organising a conference around a free and open source project. One of the members of the project takes on the task, books a venue and hires a band to come and play in the evening. If the band has an accident on the way to the event, the individual doing the organisation could be liable themselves. Similarly with donations: without a legal entity to accept the donations, they will go to an individual, and the individual will be taxed on them. According to Sandler, many payments from Google in association with their ‘Summer of Code’ programme (in which Google funds a student to work on a free and open source project while making a small donation to the project itself) go uncollected because the projects have no organisation to accept them. Forming a corporate legal entity to represent a project can also help when contracting with external commercial bodies, who might otherwise be wary. For most projects, the obvious choice of entity was a not-for-profit organisation, frequently described as a ‘501(c)(3)’ due to its definition appearing in that section of the United States Internal Revenue Code (these organisations correspond closely to charities in the UK). As with UK charities, donations to these bodies are not taxable.

‘Umbrella’ organisations

Despite the advantages of forming a not-for-profit 501(c)(3) to represent one’s free or open source project, there are considerable overheads. To maintain the status, considerable amounts of paperwork need to be done, regular meetings must be held and official positions must be filled. In the US, different states have different regulations about how these requirements are met, with online meetings acceptable in some but not in others. Officials need to provide signatures for corporate paperwork, but these usually need to be traditional paper signatures, with cryptographic signatures unacceptable. The informal nature of the collaboration within many free and open source projects also means that a not-for-profit status, once achieved, can be precarious. If one of your officers is ousted or leaves in a huff, they must be replaced immediately or the status will lapse. All in all, forming a not-for-profit organisation brings many advantages at the cost of much organisational overhead. Thus, Sandler explained, you see the springing up of many not-for-profit ‘umbrella’ organisations, which handle the corporate overhead at a level above the individual projects. The Apache Software Foundation, Software in the Public Interest and the Free Software Foundation itself were examples of this kind of arrangement. To this list, the SFLC had added its own organisation, the Software Freedom Conservancy, of which Sandler herself is the Secretary. The umbrella organisation can play an important advisory role, as well as helping with organisational issues, encouraging projects to think through worst-case scenarios and settle their copyright ownership model. A further advantage of this model was that, because all the money is held at a level above the projects, if an individual project died off, its cash could easily be recycled to benefit the other projects.

Of course, Sandler warned, it was not all positive. Umbrella organisations, by collecting together smaller projects, also aggregate the risk of infringement action. While any one project might not be worth suing, a large umbrella organisation could potentially be a worthwhile target. Of course, when an individual project within the organisation became sufficiently large, it could always ‘graduate’ out and become a separate entity, taking its risk with it.

Questions from the floor centred on the applicability of various activities - like merchandising - to the not-for-profit status. Sandler explained that it was perfectly possible to run a not-for-profit organisation whose purpose was the maintenance of the code base while running a for-profit business alongside, selling services, support and hats.


Patents

I was extremely interested to hear the SFLC’s take on the software patent issue. Moglen and Stallman’s GPL v3 famously states in its preamble that ‘every program is threatened constantly by software patents’, a formulation that has been ridiculed by some as sounding twitchy and paranoid. Richard Fontana stepped up to explain how the SFLC dealt with the issue.

Fontana began by saying that he is sometimes asked by free software developers whether they ought to be applying for software patents themselves, as a kind of defensive measure. His answer was ‘no’. Patents were in many ways antithetical to the ethos and the copyright licensing practices of free and open source software. Their conception was conditioned by a C19th view of the development of technology. The phrase ‘software patents’ was, Fontana said, difficult to define, and tended to be used almost exclusively by those who opposed the concept. The SFLC believed that the idea of the patentability of software was something that should still be debated. Fontana sketched out a brief history of the US courts’ attitude towards the efficacy of patents, describing it as cyclic. The early part of the C20th had seen courts generally unwilling to take strong prohibitive action against infringers, while the latter half of the C20th had seen the opposite, with a generally pro-patent approach taken in decisions. This era, Fontana said, seemed to be in recession now, with the Supreme Court increasingly willing to review the pro-patent decisions of lower courts. In specific reference to software patents, or computer-implemented inventions as their proponents prefer to call them, the last 30 years in the US had seen a gradual journey from unpatentability to patentability, although in fact even in the 1960s some patents of software inventions had been granted by the United States Patent and Trademark Office (USPTO). Fontana then briefly covered this journey, beginning with the assumption that software was essentially a form of mechanised ‘mental steps’ and therefore unpatentable (on the grounds that thought itself was unownable) through to the 1981 Diamond vs Diehr case, where the Supreme Court upheld a software-implemented patent. From that point, the lower courts assumed that software patents were essentially fine and began to uphold them regularly. Bare algorithms still remained problematic, but crafty drafting - reciting hardware components and citing tangible results - was usually able to dress these up as patentable.

The SFLC, Fontana reiterated, does not see this as a completed dialogue. In the recent Supreme Court case Microsoft vs AT&T, Eben Moglen filed an amicus brief (an argument from an interested party who is not directly involved with the case) arguing that Microsoft should not be held liable for violating AT&T’s software patent because software patents should not be granted.

FOSS or patent?

Fontana then drew an interesting analogy between free and open source software and the rise of the software patent. They had, he said, come into existence and flourished over almost exactly the same time frame. One of the main thrusts of any justification of the patent system is its ability to foster and encourage innovation through public disclosure of the particulars of new inventions. Proponents of free software make the same claims for it. Having recognised this similarity in aims if not methods, Fontana then listed some of the reasons that all members of the free and open source software community tend to hate software patents. Developers, he said, tend to view software not as an invention or as a series of inventions but as something more akin to an artistic work. Thus copyright - with its literary and artistic associations - seems to them to be a more appropriate form of right to proceed from their creations. One of the arguments for the patent system is that - even if a competitive manufacturer cannot or does not wish to license a patent - further innovation can occur through the act of trying to ‘invent around’ an existing patent. Developers reply that - at least in the case of software patents - the claims in the patents are so incomprehensible that it is impossible to know if one is infringing or avoiding them. Acknowledging that the developers were often right about this, Fontana went on to say that this frequent failure of software patents to clearly and definitely delineate their claimed invention was an important potential weakness. To be legitimate, a US patent must be non-obvious, novel and definite. This last requirement means that a patent that can be shown to be too vague in its specifications must be invalidated. Community efforts to invalidate problematic patents tended to focus on demonstrating lack of novelty or obviousness via prior art, while demonstrating indefiniteness could be at least as useful a tool. 
(It’s interesting to note that in the current software patent infringement case between Blackboard and Desire2Learn, the latter changed its argument against infringement to include indefiniteness at the last minute, after Moglen had publicly stated the SFLC’s interest in the case.)

Free and open source software developers tended to see the granting of software patents as a kind of theft from a common store of knowledge, Fontana continued. The USPTO was seen as a co-culprit, granting patents despite their lack of a prior art database for software (a result of its relatively recent transition into patentability). Until recently, Fontana pointed out, the USPTO had refused to employ computer science graduates as examiners.

So what did the SFLC do? Free and open source software developers frequently approach them in trepidatious mood over their potential to get sued for software patent infringement. In reality, Fontana said, the individual developers were probably worrying far too much - after all, very few of them had assets that would warrant the legal action. Nevertheless, their concern for their own potential liability acted as a useful surrogate for the real risk to the community, which is that large-scale commercial distributors of free and open source software, and their customers, could be sued as a result of infringing code written by the developers. Where a patent appears to be invalid, the SFLC’s preference is to apply for a re-examination by the USPTO, at a cost of approximately $2,500. This was significantly cheaper than going for a declaratory invalidation in court. If the USPTO agreed to re-examine, it generally (~80% of the time) resulted in a significant narrowing of the re-examined patent’s claims. The risk of course, is that in the ~20% of cases where it does not, the re-examination has the effect of making the patent seem stronger. As well as this work, the SFLC has of course worked hard on drafting the GPL v3 over the last couple of years, with its more specific focus on threats to free software originating in patents.


During the question period, the issue was raised of the TRIPS agreement and its tendency to create ‘clones’ of the US patent system in foreign regimes. This in turn raised the issue of the supposed absence of software patents in the EU. Fontana was dismissive of the idea that there are no software patents in the EU, and characterised it as something that politicians tend to say without any factual grounding. His experience was that software patent claims granted in the US were regularly also granted in the EU with little or no adjustment.

The issue of community-driven prior art searches was also raised, and Fontana commented that these efforts, though they could be useful, were often misguided as a result of a failure to properly understand the scope of the claims of the patent in question. The first thing you need to do when looking for prior art is understand precisely what it is you are looking for, and patent-drafting idioms being what they are, this was not an easy task for the layperson.

Another questioner asked how the SFLC can advise participants in a global community like the free and open source software community when they were all US lawyers. Was there not a risk of malpractice, and if so how did they manage it? Fontana was clearly a little thrown by this question, but Eben Moglen, who was now due to take the stage, shouted ‘we carry malpractice insurance!’ from the sidelines.

Finally, a questioner mentioned a couple of projects that are designed to make the examination of software patents more rigorous and effective. The first of these was the Open Source Development Labs’ ‘Open Source as Prior Art’ project, which aims to identify software processes within previously published open source software as a form of prior art database. The second is the ‘Peer To Patent Project’, which aims to provide better examination of patent applications through community review. Fontana replied briefly that although both projects clearly meant well he doubted that either of them would do much good.


Internationalisation

And so to the final talk of the day, given by the founder of the SFLC, Eben Moglen. I had heard Moglen speak before and have always been struck by his skilful, machine-gun oratory. With the previous talk over-running by 15 minutes, Moglen clearly decided to shift up a gear to end on time, which he did, although my brain was some way behind.

Moglen began by recognising the flow of the previous talk, with its final questions on how the SFLC could hope to manage the role of advising the participants in a global phenomenon. Lawyers, he said, were by nature localists, confined by their qualifications to be experts on the law within tiny geographical reservations. The SFLC had to proceed despite this fact because, as Moglen put it: ‘we’re all they’ve got’. In the case of the recent Busybox action against Monsoon Multimedia, they would argue that - as Monsoon Multimedia made their (allegedly GPL-violating) software available via the web - it could be downloaded in southern New York, and that was where they’d take action.

Using limited resources, it was the SFLC’s responsibility - Moglen argued - to identify where they could make the most difference and to apply themselves there. His hope was that, by becoming longitudinal specialists in this specific area, they could have a long term effect in fostering the global public good that the community offered. They must identify the projects that offered the most potential public good and nurture them. This necessarily meant working outside the US legal jurisdiction. At this point Moglen introduced Mishi Choudhary of the SFLC, who was sitting in the audience. Choudhary will be opening an office of the firm in New Delhi in 2008.

As well as practising, Moglen continued, it was vital that they publish. Information for coders had to be brief and direct - one screen, no disclaimers. He also hoped to ‘generate legal technology worthy of being copied’.

Moglen said that he hoped that in this way, it might be possible to effect a change in the practice of public interest law similar to the change that free and open development had wrought in the world of software. Finally - explicitly evoking Louis Brandeis’ activist promotion of ‘lawyering for the situation’ - Moglen said that the SFLC hoped to provide ‘lawyering for the community’.


By this time, it was just after eleven in the evening, according to my body clock. I had 30 pages of notes and a semi-crippled right arm. Nevertheless, I felt energised by the presentations and heartened by the broad approach to the community taken by the speakers. Moglen’s vision of a streamlined, longitudinal activist law firm was fascinating, and clearly born of a great deal of personal commitment and vigour.
