Software engineering blog of Clément Bouillier

Thursday, November 10, 2011

Agile teaching thoughts

I have just come back from an event organized by Institut Agile (Paris). It was a first workshop on what could be done to promote Agile learning in initial (university-level) education. We shared visions during the afternoon (think big, too big some would say ;)), and we did not have enough time to talk in detail about an action plan (act small).
I have to say that I appreciate this type of event, mostly for the diversity of points of view, which leads to lots of exchanges.

Then, in the evening, we had experience reports from Claude Aubry, Jean-Luc Lambert, Frédéric Dufau-Joël and Alexandre Boutin. Lots of interesting feedback; here is what I keep in mind:

  • Avoid lectures, prefer interactive courses, use lots of games (role playing)
  • In France, we face the challenge of changing the student mindset that has been inculcated by the time they reach (under-)graduate level. Students have been overprotected: they lack autonomy and confidence, they are naïve about reality and scared of making mistakes. They need to be given a sense of responsibility and allowed to make mistakes without reprimands
  • Avoid compartmentalized courses; integrate them into a coherent overall teaching project
  • Be careful with (or avoid) individual grading; encourage team collaboration and therefore team grading
  • Give students real projects to lead using the techniques they have learned
My current vision of Agile teaching

From our discussion, I would summarize the vision to adopt through answers to the following questions (it is my point of view, not necessarily shared by every member of the community):
  • Why do we need to teach Agile? To share the Agile spirit that promotes fun, motivation, responsibility, commitment, questioning... much more than techniques, it is a mindset. In the end, it would surely result in a world with much more common sense than the one we live in.
  • Where do we want to go? We need to influence public authorities and curriculum management through "lobbying", and why not also leverage fundamental research in this field (even if I think it is partially covered by existing human sciences and computer science research)
  • What content should we teach? We have to build a sort of reference framework of Agile practices and techniques, plus the prerequisites needed to understand Agile.
  • How do we teach Agile? We need to share teaching practices and produce more and more experience reports (in fact, to apply Agile principles to our own approach...).
  • Who is targeted? Every student who could be involved in an IT project given their initial education, from "classic" software engineers to management/business profiles (and even any student if we only want to spread the Why...).
Things I would like to dig

First, I would like to start teaching Agile with small courses. But then, I think it would be great to incrementally build a teaching project around the following axes:
  • Technical, behavioral and methodological courses that are integrated and related to one another,
  • With lots of practice and games rather than lectures,
  • The whole applied through real projects with real customers (not fake ones),
  • With strong involvement of professionals close to the teaching staff (e.g. close relationships between alumni and teaching staff)
Now it is time for action. I will certainly continue to participate in this new community. Thanks to all participants, I enjoyed this moment.

Friday, October 7, 2011

Specialization in Agile (part 1): visions other than hyperspecialization

I have wanted to write a post on this subject for a while (amongst lots of others…), and a post by Mike Cohn about Agile in the Age of Hyperspecialization reminded me of it (it is also a recurring subject at work for me). I wrote some comments on his blog, but I would like to go into a bit more detail here.

First of all, I am not comfortable with the analogy drawn between manufacturing activities (or building and civil engineering) and the IT industry, which basically reduces developers to “simple” workers on a production line (cf. Ford, with all due respect to manufacturing workers)… and I think that is the point that is totally missed with hyper-specialization.

I prefer other visions more suitable to our industry. In this first post, I will talk about the “cultivate your code” analogy, “mentofacturing”, and the feature team approach.
Then, in a second post, I will illustrate what a stereotypical “hyperspecialized” organization looks like nowadays, and present my own vision (shared and applied in my team), which is better aligned with Agile and Lean principles from my point of view.

“Cultivate your code”, a refreshing analogy

The classic analogies applied to the IT industry are with building and civil engineering, or with manufacturing. But the IT industry is fundamentally different from these industries. I will present some arguments for another analogy, called “cultivate your code”, which I discovered through Benoît Gantaume (in French, but page translation can help non-French speakers).

We often mimic the processes of building and civil engineering. Those processes are justified by the fact that once you have built a building, it is very hard to change; it will not evolve as much as an application will. So they need to do lots of studies and check every detail before laying the foundations. In software, on the contrary, it is very important to be able to build an application that can evolve quickly and potentially in depth (i.e. not only on the surface). Moreover, an application that does not evolve will degrade if it only receives surface maintenance; that is why Benoît Gantaume compares it to a plant, with many comparisons between day-to-day development activities and those we see in a garden (I let you read his post). The main idea is that a garden needs constant care, and software is comparable in this respect.

In the IT industry we like to use manufacturing vocabulary, notably “industrialization”. Note also that the products are completely different: in manufacturing, products are designed once and built many times on a production line. Software is not in that situation at all; it is unique, designed once and built once. Regarding industrialization, we apply recipes of the 20th century (or even the 19th): industrialization through an (overly) manual production line, adapted to IT through hyper-specialization. We do not use enough tools and automation in the service of individuals in IT, even though it is far simpler and cheaper than in manufacturing (installing software versus building robots). These arguments are not covered by Benoît, but I think it is like using the right tools to maintain your garden rather than using only your hands. Even worse, we often use tools that constrain individuals, as if you were trying to dig with a rake…

“Mentofacturing”

I heard this word for the first time at the USI 2011 event (for those who understand French – an English translation is also available – here is the webcast). It was presented by Vincent Lextrait. He has started writing a book, and the first chapters are available at http://www.mentofacturing.com/. I will try to summarize the key points, but I encourage you to read at least the home page, which gives a good overview.

The reasons why Adam Smith advocated the division of labor were based on parameters different from those that characterize work today. First, there was a shift with computers, which minimize the cost of losing time when switching tools. Second, there is far more variation in intellectual productivity than in manual productivity. Third, there is far more interpersonal dependency in intellectual activities than in manual ones.

He concludes that the management (and thus the work organization) of these kinds of activities cannot be the same as in classical industries (typically construction or manufacturing).

Feature Teams

I read about this in Craig Larman and Bas Vodde's book on scaling Agile with organizational tools (http://bit.ly/oFtSAM), and specifically in the Feature Team chapter (accessible here). The authors draw on Lean manufacturing in their book (so even manufacturing has a new vision…). It is really well explained; I will just paraphrase some parts.

The main idea is “one single cross-functional team that completes end-to-end customer features”. It is justified by Lean theory: “In lean thinking, minimizing the wastes of handoff, waiting, WIP, information scatter, and underutilized people is critical”. They also insist on the fact that team members do not have only one specialty: each one has primary skills, but with the help of other team members, each member can complete an end-to-end customer feature (a reference to the “generalizing specialist” introduced by Scott Ambler). Moreover, learning is central, in order to share skills and knowledge.

The difference between a Feature Team and a Feature Project is also very important. With a Feature Team, you have less of the organizational noise that comes from the coordination needed when several Feature Projects work on one application. It also capitalizes on a group that learns together and does not break up at the end of a project. And third, a Feature Team has shared ownership of code, process and skills.

 

These three visions illustrate alternatives to the mainstream view that advocates hyperspecialization. The next post will focus on an example.

Monday, January 3, 2011

Data backup solution based on RSync with a NAS

I have experienced some limited data loss in the past due to a hard disk crash, and recently my first external hard drive started to have issues… I can only reiterate the popular recommendation to think seriously about backing up precious data as soon as you get more and more warning signs, like repeated hard drive scans at startup (or when you plug in an external drive), or suspicious behavior when reading data from the drive… That is what I did recently, and I avoided losing plenty of personal photos and videos…
From that moment on, I decided to set up a permanent backup solution. After having a look at web-hosted solutions (not completely convinced), I finally went for my own NAS, a D-Link DNS-323, which is really easy to configure and extend (embedded Linux). It was also a chance to get my hands dirty with Linux toys again :) (it had been a long time…), but don't be afraid to try! (unless you only use a computer to write documents and emails… in which case it could take you several long nights to get it running)

Rsync over SSH as the main tools

Rsync is incremental file synchronization software for Unix systems. It is command-line based, but can be really powerful when combined with scripts. I let you search the web for details on this tool; I will only show how I use it for backups. Note that there are several shared solutions built around Rsync. I was particularly inspired by wiki.dns323.info and BackupNetClone. I created my own scripts since the first one is too minimalist (based on BAT scripts… ouch) and I found the second one too intrusive on client computers (it needs an SSH daemon and an Rsync server on each).

SSH will be used to secure RSync file synchronization.

To use it with Windows clients, the first thing to do is install Cygwin (or another Linux emulation layer). It is really simple: you just click Next until the package selection step, select the rsync and openssh packages (just the main ones, dependencies will be grabbed automatically), and then click Next until the end.
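
As a side note, Cygwin's setup.exe can also preselect packages from the command line, if I remember correctly, with something like the line below (the flags are from memory, check the installer's help on your version):

setup.exe -q -P rsync,openssh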

I will come back to the client setup (don't be afraid, it is just a script that has to be scheduled…) after a quick look at the server side, i.e. the NAS.

Set up NAS

My NAS, a D-Link DNS-323, is Linux based. You have to use a fun_plug script that is loaded at NAS startup. You can use ffp, which includes some applications, in particular SSH and rsync daemons. Follow the instructions at the following link to install it: wiki.dns323.info/howto:ffp.

Typically, you will set up a backup account on the DNS-323 through the admin interface (http://[NAS IP]): add a "backup" account in the Advanced tab. Next, you can change its home directory and shell in /etc/passwd.
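
For illustration, the backup account entry in /etc/passwd could end up looking something like this (the UID/GID and paths are just an example; the ffp shell and the data volume may live elsewhere on your NAS):

backup:x:501:501:backup user:/mnt/HD_a2/backup:/ffp/bin/sh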

Set up clients

First, you have to configure the client once; later, you will probably only change the configuration of which folders to back up.

First time set up

Here is what you have to do once for each client computer (i.e. each computer to back up):

1. Generate the SSH key pair that will be used next:

ssh-keygen -t dsa -b 1024

You can keep the default key path. Do not provide any passphrase if you want to automate your backups (otherwise it will be asked for each time you back up).

2. Copy the client's SSH public key to the NAS with:

ssh-copy-id -i ~/.ssh/id_dsa.pub backup@[dns-323 IP]


I have packaged this in a script along with some simple configuration (IP, backup user name…).
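
To give an idea, this wrapper is not much more than the following sketch (simplified, and the variable names are mine):

#!/bin/sh
# setup_client.sh - run once per client computer (simplified sketch)
NAS_IP="192.168.0.10"     # IP of the DNS-323
BACKUP_USER="backup"      # backup account created on the NAS

# Generate the key pair if it does not exist yet (empty passphrase to allow automation)
[ -f ~/.ssh/id_dsa ] || ssh-keygen -t dsa -b 1024 -N "" -f ~/.ssh/id_dsa

# Push the public key to the NAS (the backup account password is asked once)
ssh-copy-id -i ~/.ssh/id_dsa.pub ${BACKUP_USER}@${NAS_IP}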





What to back up?

My scripts (explained below) look for configuration files, each giving one path to back up along with its destination path on the NAS:



# Local path to backup, use /cygdrive/[drive letter]/... syntax
LOCAL_PATH_TO_BACKUP="/cygdrive/c/testbackup"

# [Optional] Target Rsync module -> override global settings
#TARGET_MODULE="backup"

# Target path in module
TARGET_PATH="test"



Launch a backup

I have a launchBackup.bat script that launches the backup.sh script through Cygwin. In backup.sh, I load the configuration, set up an SSH tunnel, start rsync, and finally close the SSH tunnel.
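
To picture it, here is a simplified sketch of backup.sh (paths and variable names are mine, the real script has a bit more error handling):

#!/bin/sh
# backup.sh - simplified sketch of the backup script launched by launchBackup.bat
NAS_IP="192.168.0.10"
BACKUP_USER="backup"
SSH_TUNNEL_LOCAL_PORT=10873   # local port forwarded to the rsync daemon (port 873) on the NAS
TARGET_MODULE="backup"        # default rsync module, can be overridden per configuration file

for CONF in ./conf/*.conf; do
    # Load LOCAL_PATH_TO_BACKUP, TARGET_PATH (and optionally TARGET_MODULE) from the configuration file
    . "$CONF"

    # Open the SSH tunnel in the background
    ssh -N -L ${SSH_TUNNEL_LOCAL_PORT}:127.0.0.1:873 ${BACKUP_USER}@${NAS_IP} &
    TUNNEL_PID=$!
    sleep 5   # crude wait for the tunnel to be ready

    # Synchronize through the tunnel
    rsync -aivx --port ${SSH_TUNNEL_LOCAL_PORT} --chmod +rwx \
        "${LOCAL_PATH_TO_BACKUP}" 127.0.0.1::${TARGET_MODULE}/${TARGET_PATH}

    # Close the tunnel
    kill ${TUNNEL_PID}
done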



The rsync command is:



rsync -aivx --port ${SSH_TUNNEL_LOCAL_PORT} --chmod +rwx ${LOCAL_PATH_TO_BACKUP} 127.0.0.1::${TARGET_MODULE}/${TARGET_PATH}



The parameter names speak for themselves; -aivx are common rsync options. I have not yet set up incremental backups with --link-dest (the hard-linking option), and I am wondering about using --delete, which also removes from the server whatever has been removed from your client folders (in that case you have to make sure that each server path is used by only one client, to avoid massive deletions…).
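
For reference, --link-dest is typically combined with dated snapshot directories, along these lines (not part of my current setup, just to show the idea; the --link-dest path is relative to the destination directory):

# Each run writes into a new dated directory; files unchanged since the previous run are hard-linked instead of copied
TODAY=$(date +%Y-%m-%d)
YESTERDAY=$(date -d yesterday +%Y-%m-%d)
rsync -aivx --port ${SSH_TUNNEL_LOCAL_PORT} --link-dest=../${YESTERDAY} \
    "${LOCAL_PATH_TO_BACKUP}" 127.0.0.1::${TARGET_MODULE}/${TARGET_PATH}/${TODAY}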



Don’t forget to check your Firewall settings if you get some “Connection refused”-like errors.



Scheduling


You can simply rely on the Windows Task Scheduler. And you are done!
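
For example, a daily task can be created from the command line (the task name and path are mine):

schtasks /Create /TN "NAS backup" /TR "C:\backup\launchBackup.bat" /SC DAILY /ST 02:00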



Assessment…



Not too pricey: I got the NAS for €100 plus €70 for a 1.5 TB hard disk drive. It is quite easy to set up, open (you are the only master of your backups), and therefore easily configurable/extendable, with unlimited possibilities.



Regarding safety, it covers hardware failure but does not protect against other, more serious domestic risks like burglary or fire… but for that I have an idea: build a small network of such NAS devices (two to start…) with some relatives for example, which would give them a backup solution in the process :)



And a final word about environmental impact: I bought an energy meter, and the NAS consumes only 10 watts when idle (most of the time), which is quite good in the end.

Sunday, January 2, 2011

Combine Hierarchy/Work Item/Date dimensions in TFS 2010 cube

I started using the TFS 2010 cube to build a Product Burndown chart based on Story Points (Y axis) over time (X axis), with State as the series, for the User Story and Defect work item types of our custom process template (customized from the User Story and Bug types found in the MSF Agile process template).
 
Basics...
With Excel, it was really simple to do: connect to the cube, use the Date and Work Item dimensions with the Story Points measure and ta-da, it is done. Ok, well, in fact it was not exactly what we wanted, because we work on a legacy application... so we have User Stories and Defects on different subjects, which we have modelled using a Project work item type that contains the related User Stories and Defects.
For example, we have a "Maintenance" Project (never really ending, btw...) which contains all the production bugs we are fixing, plus "Project A" and "Project B" Projects, each classically having a given budget and start/end dates.
 
...deeper... ouch! Problem!
My team has several Projects on one application at the same time, each from 20 man-days up to several hundred man-days, but we keep only one iteration/product backlog for the whole team on this application. Each Product Backlog item is then related to one Project.
I would like to have a Product Burndown chart restricted to the items related to a particular Project. It would help to see how its Product Backlog items evolve over time and to manage the effort needed to keep that Project on track.
I thought the Work Item Tree dimension would help me... but when I tried to add it as a filter to my Excel report, it did nothing!
In fact, this is expected behaviour. I understood it by digging into SQL Server Analysis Services features (which I had never looked at before...) and the TFS 2010 cube configuration. There are several explanations:
  • Dimensions are associated with one or several Measure Groups, and the 3 dimensions I would like to use are never all together in the same Measure Group. For example:
    • Work Item includes the Work Item and Date dimensions, but not the Work Item Tree one
    • Work Item To Tree includes the Work Item and Work Item Tree dimensions, but not the Date one
  • The Story Points measure is a calculated member associated with the Work Item History Measure Group, and it is calculated from a hidden Story Points measure of this Measure Group

[Screenshot: TFS cube Measure Groups]

Trust me, no other Measure Group includes even 2 of these dimensions... and none includes all 3.
 
Solution
So the solution was:
  • to add a view to the TFS data warehouse combining the fact tables containing Work Item facts and hierarchy facts (no change to the data warehouse loading process!)
ALTER VIEW vFactWorkItemHistoryToTree
AS
SELECT wih.*, witt.WorkItemTreeSk
FROM dbo.FactWorkItemHistory wih
INNER JOIN dbo.DimWorkItem wi1 on wih.WorkItemSK = wi1.WorkItemSK
INNER JOIN dbo.DimWorkItem wi2 on wi2.System_Id = wi1.System_Id
INNER JOIN dbo.vFactWorkItemToTree witt on wi2.WorkItemSK = witt.WorkItemSK
GO
  • to change the TFS cube data source view to add the new view and link it to the related Dim tables (deriving from FactWorkItemHistory for example); just use the designer included in Business Intelligence Development Studio (you can open the Analysis Services database directly on the server with it)

[Screenshot: Data Source View designer]

  • to add a new Measure Group with the 3 dimensions I need (derived from the Work Item History Measure Group for example, with the Work Item Tree dimension added)

[Screenshot: modified TFS cube Measure Groups]

  • to add a calculated member based on the hidden Story Points measure of the new Measure Group (within Business Intelligence Development Studio, open the Team System cube -> Calculations tab to add the new calculated member, and associate it with the Measure Group using the Calculation Properties icon)
-- Story Points with Hierarchy Tree dimension
-- Just part of the MDX definition, showing that we use the vFactWorkItemHistoryToTree measure of our new Measure Group...
CREATE MEMBER CURRENTCUBE.[Measures].
[Microsoft_VSTS_Scheduling_StoryPoints_Tree] AS
...
Sum
(
[Date].[Date].[Date].MEMBERS.Item(0) : [Date].[Date].CurrentMember
,[Measures].[vFactWorkItemHistoryToTree Microsoft_VSTS_Scheduling_StoryPoints]
)
...
 
[Screenshot: calculated member definition]
 
...even deeper with the same solution
Now I can do what I wanted with Excel, i.e. filter on each Project to get a Burndown chart for each of them.
Note that this can be applied to any work item hierarchy. For example, we also have a Release work item type in our process template, which allows us to manage release contents. We can then follow how a Release backlog evolves through a Burndown chart.
 
[Screenshot: project burndown chart]
 
Don't be afraid to look at the TFS cube; it took me 2 days to figure out what I needed (starting with no SSAS skills...). It can be very powerful.

Tuesday, November 30, 2010

Alt.Net Paris session summary: "Build your own service bus" by Romain Verdier & Julien Lavigne du Cadet

Romain and Julien gave a presentation/experience report about implementing their own service bus at ABC Arbitrage (thanks for hosting us). In addition to this summary, I encourage you to have a look at their (impressive Prezi) presentation.

They started with an overview of the context in which the service bus notion comes up, reviewing some low-level concepts of MOM (Message Oriented Middleware) and giving a quick view of EAI/ESB concepts. These are the two extreme sides of the subject, and they set their implementation around two goals:
- add an abstraction layer on top of MOM solutions, with lots of useful conventions that help simplify configuration, while keeping the power of these tools (interoperability, reliable message infrastructure...)
- be far simpler than EAI/ESB solutions, and platform specific
Obviously, this subject revolves around message-driven architecture.

Next they went deeper into service bus concepts, which some of the solutions they cited also implement. As they said, they did not invent anything.
The first subject was message dispatching (the how):
  • send: a message to one counterparty, expecting a response
  • publish: a message to anyone who wants to receive it, without any response
From these two methods, we see that there are two sorts of messages, which they implement with two distinct concepts:
  • Command, which is sent -> tell someone to do something (note the imperative form)
  • Event, which is published -> tell anyone who wants to listen that something happened (note the past form)
Next, they talked about Sagas, which handle long-running transactions. Their example of a trading application was well chosen to understand them. Basically, it receives a "complex" trading order that is decomposed into simpler orders (in order to execute the requested order without revealing the whole intention to the market...).
  • A Saga is initiated by a Command or an Event
  • It starts a process that sends Commands / publishes Events
  • It goes to sleep until woken by another Command or Event
  • ...and it starts over, until the Saga determines it has reached its goal, then it ends.
They also quickly talked about some other (important) details like pipelines for cross-cutting concerns, a time service typically used to manage Saga timeouts, Protocol Buffers for interoperable message serialization...

The main thing I would retain about the implementation is the convention-based aspect. It simplifies configuration (5 lines, as they said) and hides some technical details, which lets you build the software with better insight into the business value that any software should give to its users. It could be seen as a restrictive framework (just two concepts), but I truly think that it helps abstract away technical topics and focus on business value, and that these concepts are well suited to describing business goals.
Note that we did not talk about lots of related subjects like Event Sourcing, DDD... perhaps next time :)... but we can say that message-driven architecture does not exclude other paradigms.

error MSB6006: "ResGen.exe" exited with code -532459699

I am writing my first tip blog post to talk about the following error, on which I got stuck for an hour (far too long...) without any search results on Google:

error MSB6006: "ResGen.exe" exited with code -532459699

I got this error when compiling a project targeting the 3.5 framework with VS2010; the project includes a resource file with its Custom Tool set to generate the associated code.

In fact, I had been making some changes to my configuration. A long time ago, I had set up the use of DEVPATH (configured in machine.config on the dev machine). Due to a problem explained below, I had removed the DEVPATH environment variable, but kept machine.config as is (to avoid a full reconfiguration in case I temporarily needed DEVPATH again... ok, I am really lazy :)).
Then I discovered that ResGen.exe was failing; I found it when I launched the compilation from the command line with MSBuild, and it was clear in the exception trace details... really bad! (note that it works when you do not have RESX files...)

By the way, you should avoid using DEVPATH. It tries to replace the GAC, but in fact it falls short since it does not take DLL versions into account. But do not throw it away too quickly; it can be useful, for example, to debug a third-party DLL without modifying all your references...
I found this out when I had to install EntLib 5.0 side by side with EntLib 4.1... only the DLL found in the first path referenced in DEVPATH was taken into account.
So, 2 solutions: either you are not lazy and you remove the configuration from machine.config, OR you are lazy and you set the DEVPATH environment variable to something other than an empty value (for example "D:").
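
For reference, the machine.config switch that enables DEVPATH looks like this (from memory, so double-check the exact element on your machine):

<configuration>
  <runtime>
    <developmentMode developerInstallation="true" />
  </runtime>
</configuration>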

Tuesday, November 23, 2010

Team Foundation Server Check In Policy Pack

I searched the web for Check-In Policies for TFS and found some, a few of them really interesting in my case. But there are several drawbacks:
  • The TFS Power Tools Check-In Policy pack is not open source... you cannot contribute, which is too bad when you only need a small adjustment to an existing policy
  • Configurability goes from none to too much
  • Too many packages with heterogeneous deliverables (package, docs...)
So why not an open-source Check-In Policy pack, on CodePlex for example... I think we could gain from homogenizing development practices for Check-In Policies.

Here are some practices I have in mind:
  • MSI deployment with WiX: one feature per policy, for example, to allow flexible installation
  • Unified policy configuration management
  • Encapsulate common functionality in shared components (path exclusion...)
I started thinking about this pack when I wanted to share my first Check-In Policy around Work Items, after discovering that the one bundled with TFS and the TFS Power Tools one are really basic and cannot be extended (they do not allow link/hierarchy queries)!!

It would be great if several projects could merge into one...