Being Creative and Productive Despite an Insane Working Pace

This is the first part of the series “Busyness, Productivity and Creativity”. In this series I discuss why we have to slow down to get things done faster and better. The series consists of the following parts:

  1. Being creative and productive despite an insane working pace
  2. Two organization anti-patterns: Urgency as Blame and Busyness as Heroism
  3. The Good Sides of Busyness – Busyness and Urgency as Heuristics for Cadence
  4. 120 000 Nails – Urgency and Busyness as Motivators

The puzzle analogy and slack

So, to be productive, information workers need some slack – empty space in the calendar. If you keep them busy all the time and maximize their utilization, they get fewer things done. What really matters is getting things done, not the number of working hours. Sometimes, by doing more work, less work gets done. Counter-intuitive, eh?

Tom DeMarco illustrates the need for slack in his book “Slack” as follows:

Puzzle A is solvable; puzzle B is impossible. Yet in puzzle A approximately 11% of the space is unused, while in puzzle B all the space is utilized. Is that 11% waste? Innovation dwells in an organization’s ability to learn, adapt and change. The empty space needed for change and learning isn’t waste – except in Excel!

Donald Reinertsen also underlines the importance of a sufficient amount of slack in the organization in his book “Managing the Design Factory”. Reinertsen does not use the term ‘slack’; instead he discusses lead time, queue length, capacity and so on. In practice, his recommendations and conclusions are similar to DeMarco’s. Reinertsen’s mathematical approach shows that slack makes sense even in Excel – if you measure the right things.

The third book reference: David Rock’s “Your Brain at Work” discusses this subject from the neurophysiological point of view. The brain has a limited amount of resources. Creative thinking requires a lot from your brain, so all the irrelevant noise – like awareness of unattainable deadlines – lowers the probability of a creative insight. Most of the time, busy people just “survive” – they don’t “shine”, since they don’t have time for that.


Last spring I had a very nice discussion about busyness, rush, pressure and their impact on creativity and organizational efficiency. I had just referred to DeMarco’s “Slack” and claimed that continuous busyness and single-minded attempts to maximize the utilization of people tend to kill creativity and the ability to change. As a bonus, people burn out sooner or later – or are smart enough to apply for another job before that. My opponent, a smart Russian game developer, replied:

[My busyness] is nothing compared to working in an advertising agency. Advertising is a job where you usually sleep at the workplace. And people usually just burn out after some time… It is strange (doesn’t fit into your [claim that Rush kills creativity and ability to change]) but creativity actually flourishes – at first, before a person burns out. In the other company – software startup – they introduced plans and procedures to avoid rush. But it lead to people becoming more passive and non-creative over time, because some air of suddenness and inspiration disappeared. So it was good for some people, who were new; there will be no sudden changes just before deadline, but not so good for those who used to act on inspiration – like being half-asleep for a week and then do the week’s work in three hours.

I was forced to restructure my argument. What follows is my revised answer to her excellent counter-argument. I have divided my argument into two parts:

  1. Why do some organizations perform well and stay highly creative even when the working pace is insane?
  2. Why does a lack of rush often seem to lead to passivity and lack of inspiration?

Why do some organizations perform well and stay highly creative even when the working pace is insane?

Counterexamples like this against my argument are probably rather common, but they are nonetheless probably based on misconceptions or fallacies. I know there are a few opinionated studies claiming otherwise, but the studies I have referred to pose rather strong counter-arguments against them.

It’s true that advertising agencies do highly creative work from time to time and perform extremely well. However, you cannot deduce that they are creative because of the overly tight schedules and busyness.

I presume that the creativity in advertising agencies can be explained by the following four factors:

(1) Motivated, competent and experienced people. In advertising agencies the workers are often highly motivated and competent. Some of them have practiced the arts (e.g. drawing) for decades and have a high level of formal education in the arts, while others have a formal education in economics.

(2) Diversity in the working community. Diversity in the working community increases the number of innovative solutions: in advertising agencies some of the workers are highly art-oriented while others have a financial stance and are money-oriented. This kind of diversity and multitude of perspectives is good for creativity.

(3) Working environment. The working environment and the way of working are in themselves fruitful. Workers often enjoy a high level of autonomy and trust (at least the “high performers” do). The success of a campaign depends greatly on them, so there is no place for indifference. In a way the results are “part of you”, and therefore you want them to be as good as possible.

(4) Short feedback loop. The feedback loop between an idea and the response is relatively short. Often you see within a few hours, rather than a few months, whether the idea worked. To optimize learning, you have to shorten the feedback loop.

I argue that excessive busyness and rush make advertising agencies perform worse than they could. They have a very good starting point, and then greed spoils the most creative edge of the working community. Greed not only makes them perform more poorly, it also endangers their health. According to the studies I’ve referred to, overly busy advertising agencies would get much better results and run even more creative campaigns if there were enough slack in their schedules.

How much slack is needed?

I do not claim that there should be only slack – that does not work either. Creative workers need some empty space, but only some. So, how much slack does an information worker need? I have not seen exact numbers. The proper amount of slack probably depends greatly on the person and on what you are doing.

According to Donald Reinertsen, an ordinary software development company achieves optimum performance at a 60–80% utilization rate (2010, “The Principles of Product Development Flow”). If utilization exceeds 80%, there are usually long queues that add no value, only cost. That is, there should be 20–40% slack for the organization to perform optimally. Then again, Reinertsen discusses organizational performance only, not creativity or the performance of an individual person.
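Reinertsen’s point can be illustrated with elementary queueing theory. This is my own illustration, not an example from the book: in a simple M/M/1 queue, the average number of items in the system is ρ/(1−ρ), so queues grow explosively as utilization approaches 100%.

```python
# Average number of items in an M/M/1 queueing system as a function of
# utilization (rho). The exact numbers depend on the queueing model;
# this is only meant to show the shape of the curve.

def expected_queue_length(utilization: float) -> float:
    """Average number of items in the system: rho / (1 - rho)."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (1.0 - utilization)

for rho in (0.60, 0.80, 0.90, 0.95, 0.99):
    print(f"{rho:.0%} utilization -> {expected_queue_length(rho):5.1f} items on average")
```

Going from 80% to 95% utilization does not make the queue a little longer – it makes it almost five times longer, which is exactly why the last 20% of “free” capacity is so expensive.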


Next part: “Two organization anti-patterns: Urgency as Blame and Busyness as Heroism”


Toward the Game Theory of Everything

On July 22nd Business Insider published an article titled “They Finally Tested The ‘Prisoner’s Dilemma’ On Actual Prisoners — And The Results Were Not What You Would Expect”.

There are a few different variations of the dilemma. The study uses this one: “Two criminals are arrested, but police can’t convict either on the primary charge, so they plan to sentence them to a year in jail on a lesser charge. Each of the prisoners, who can’t communicate with each other, are given the option of testifying against their partner. If they testify, and their partner remains silent, the partner gets 3 years and they go free. If they both testify, both get two. If both remain silent, they each get one.”

The article claims that, according to game theory, betraying the partner should be the dominant strategy even though mutual cooperation would be the best outcome for both players. (This interpretation and analysis of the dilemma is a bit too straightforward for my taste; see the Stanford Encyclopedia of Philosophy’s article on the prisoner’s dilemma for a more complete analysis.) It was no surprise to me that humans are more cooperative than the purely rational models traditionally used in economics predict. Applying game theory to social reality is a tricky thing to do right.
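The dominance argument is easy to verify mechanically. Here is a minimal Python sketch of the payoff matrix quoted above (years in prison, so lower is better):

```python
# Payoff matrix for the study's variant of the dilemma, as years in
# prison (lower is better). Key: (my move, partner's move) -> my years.
YEARS = {
    ("testify", "testify"): 2,
    ("testify", "silent"): 0,
    ("silent", "testify"): 3,
    ("silent", "silent"): 1,
}

def best_response(partner_move: str) -> str:
    """The move that minimizes my prison time, given the partner's move."""
    return min(("testify", "silent"), key=lambda my: YEARS[(my, partner_move)])

# Testifying is a dominant strategy: it is the best response to either move...
assert best_response("testify") == "testify"
assert best_response("silent") == "testify"
# ...yet mutual silence (1 year each) beats mutual testimony (2 years each).
assert YEARS[("silent", "silent")] < YEARS[("testify", "testify")]
```

Both assertions hold at the same time, and that tension is the whole point of the dilemma.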

Then again, the game theory could work, but… That’s the subject of this blog entry.

Evidence against game theory?

Among my social network, the most common interpretation was that the study is counter-evidence against game theory. The prisoner’s dilemma overall can be seen as a counterexample to game theory.

I’m unwilling to draw such a hasty conclusion. The study might be evidence against the self-interested, calculating and rational behavior suggested by game theory. On the other hand, it may just as well show that our conception of what is beneficial or desirable for an individual is too limited, or that the interpretation of rationality used is too narrow.

A rational agent

A problem with game-theoretical models is that every now and then even intelligent, well-informed people behave differently than the model predicts, for reasons that are intuitively clear and understandable to others. A model need not be complete and errorless to be useful, but if it gives false predictions that are also counter-intuitive, something is wrong with the model. In my opinion the problem is not in the game-theoretical approach per se but in the preferences the models (arbitrarily) expect rational agents to have or not to have.

Since I don’t want to take preferences for granted, I start by defining a rational agent slightly differently than usual: a rational agent always tries to attain the most beneficial overall outcome for itself in a systematic and consistent way. A rational agent can be a person, an organization, a machine or software. The most beneficial outcome for an agent is defined directly or indirectly by its needs (e.g. it is rational to seek food if you are hungry).

By ‘need’ I refer to a mechanism that makes an agent prefer one alternative over another, rather than prefer nothing. In the case of organizations, ‘demand for something’ is a need. In the case of machines, the need is a preprogrammed “state of satisfaction”. Think of an intelligent painting robot: it has a ‘need’ to paint every point of a surface with a minimum amount of paint and in a minimum amount of time. That state of satisfaction defines and determines its decision making and learning throughout. In the case of humans, ‘need’ means pretty much what you expect it to. We have a basic need for nutrition, and a not-so-basic need for financial prosperity. Our ability to evaluate the benefits of different outcomes is derived from our needs. We are able to evaluate the financial value of an object (most likely) because we have a need for financial prosperity; otherwise we would find such an evaluation irrelevant and obscure.
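As a toy illustration of an evaluation function derived from needs, consider the painting robot. This is a hypothetical sketch of mine; the weights and numbers are invented:

```python
# A hypothetical painting robot. Its "state of satisfaction" - full
# coverage with minimal paint and time - is the only source of its
# evaluation function. It has no need for money, so a monetary
# evaluation of an outcome simply does not exist for it.

def robot_score(outcome: dict) -> float:
    """Higher is better: coverage dominates, paint and time are costs."""
    return 100.0 * outcome["coverage"] - outcome["paint"] - outcome["time"]

outcomes = [
    {"coverage": 1.00, "paint": 4.0, "time": 3.0},  # complete, more paint
    {"coverage": 0.95, "paint": 2.0, "time": 2.0},  # cheaper, incomplete
]

best = max(outcomes, key=robot_score)
print("The robot rationally picks:", best)
```

Given these weights, the robot prefers the complete paint job even though it costs more paint and time; change the weights (i.e. the needs) and the “rational” choice changes with them.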

Under this definition, an agent is irrational in only two cases:

  1. if an agent makes an error in deduction – I call this ‘unintentional irrationality’;
  2. if we have a bounded context for rationality in which we have intentionally ignored some of the de facto needs the agent has, and the agent’s decision was based on an ignored need – I call this ‘contextual irrationality’.

My position here is naturalistic: I don’t take preferences, goals or the expected value of an outcome for granted or as objective facts. There is no god-like point of reference or platonic world of ideas that would make an outcome valuable or preferable. There is only a system (or a machine – organic, abstract, electronic, etc.) that produces the preferences and the evaluation functions from the needs and the information the system has or can acquire. In case your background is in the continental philosophical tradition: I believe these ‘needs’ are more or less equivalent to Deleuze and Guattari’s ‘desiring machines’, though I’m not completely sure about that.

The question of self-interest

Back to the prisoner’s dilemma.

According to the SCARF model (see David Rock’s article on SCARF in the NeuroLeadership Journal), autonomy – control over one’s environment, a sensation of having choices – is just one of five primitive needs we have. There are at least two others that are relevant in the case of the prisoner’s dilemma: social relatedness and fairness. According to the SCARF model, relatedness and fairness are needs as strong as autonomy, and they may even be as strong as Maslow’s basic needs (e.g. food and physical safety).

If the SCARF model is true, a person who has a strong need for fairness and who acts systematically and consistently to make the world around them fairer is not only rational, but also behaves in a self-interested and calculating way.

You cannot claim that ‘minimizing the expected time in prison’ is necessarily a more rational, beneficial and self-interested goal than fairness toward the other prisoner – for some actors it is, for others it’s not. At the very least, you need one more premise to draw such a conclusion: “For agent X, the objective to ‘minimize the expected time in prison’ is based on presently stronger needs, and is thus presently more desirable for him, than the objective to make a maximally fair and mutually optimal decision.”
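To make that extra premise concrete, here is a hypothetical sketch of mine (not from the SCARF article): if we add a fairness term to the agent’s utility function, the ‘rational’ move in the dilemma can flip.

```python
# Hypothetical sketch: an agent whose utility combines prison time with
# a need for fairness, modeled here as a penalty for unequal outcomes.
# Key: (my move, partner's move) -> (my years, partner's years).
YEARS = {
    ("testify", "testify"): (2, 2),
    ("testify", "silent"): (0, 3),
    ("silent", "testify"): (3, 0),
    ("silent", "silent"): (1, 1),
}

def utility(my_move: str, partner_move: str, fairness_weight: float) -> float:
    mine, theirs = YEARS[(my_move, partner_move)]
    # Prison time is a cost; so is an unequal outcome, scaled by the
    # strength of the agent's need for fairness.
    return -mine - fairness_weight * abs(mine - theirs)

def best_response(partner_move: str, fairness_weight: float) -> str:
    return max(("testify", "silent"),
               key=lambda my: utility(my, partner_move, fairness_weight))

# With no need for fairness, betrayal is the best response to silence:
assert best_response("silent", 0.0) == "testify"
# With a strong enough need for fairness, silence becomes rational:
assert best_response("silent", 1.0) == "silent"
```

Nothing in the agent changed except the relative strength of its needs – and with it, what counts as self-interested behavior.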

If the needs of an agent do not define what self-interested behavior is, what does? Theories of rationality are often biased by an idealistic conception of self-interested behavior. The contradiction between altruism and self-interest is often illusory, though still possible in certain situations.

Simplistic game theory

To fulfil needs and desires like fairness or relatedness, a rational agent needs to make calculations for which classical game theory is a bit simplistic and naive.

Game theory is not simplistic because it seems to ignore psychological facts about human nature, or because it presupposes that all actors are rational while in the real world we sometimes seem to act irrationally. Psychology is somewhat irrelevant here. You can see game theory as a purely mathematical theory that – obviously – is poorly adapted to psychology and sociology.

A game theorist can always claim that our mathematical model of psychological reality is incomplete (within the bounded context we are interested in). If it were complete and sufficient within that context, game theory would be fully applicable. A game-theoretical model could, for instance, take emotions into account; it could model them mathematically to a sufficient extent – not completely, but extensively enough. If a practical application of game theory doesn’t do that even though it should, that proves nothing about the mathematical core of game theory.

Rather, current game theory is simplistic because it focuses almost solely on first-order desirable outcomes and needs, and almost completely ignores second- and third-order desirable outcomes and needs. Because of that, you can use it only in limited contexts. Often the limitations are a bit too strong, and a game-theoretical model ends up being merely a theoretical model rather than a practical one.

Winning strategy vs. heuristic strategy

Second-order preferable outcomes and needs relate to properties of the structure of reality; fairness and certainty are examples of such properties. (For the sake of clarity, I ignore “non-living machines” from now on, as we don’t have machines that make decisions on this level.)

While first-order outcomes are immediately attainable factual things or states of things (e.g. freedom, or two years in prison), second-order outcomes are something people want to maintain, improve or change, but cannot possess or attain. That is, people cannot achieve fairness as a factual thing, but they can make the world around them fairer.

Achieving a second-order desirable outcome requires a heuristic strategy rather than a winning strategy: “what does it mean to be fair and to be treated in a fair way?” comes before “how do I ensure fairness regardless of what ‘moves’ the other agents make?” A rational agent cannot know in advance what would be optimal, because the optimal state is not a state of the actual world but of all the relevant possible worlds. The meaning of fairness depends on what an agent can expect from the others in any actual and counterfactual situation. The heuristic strategy is about finding the relevant possible worlds. It is noteworthy that ‘possible world’ is already a concept in game theory; it is needed to understand probabilities. Then again, possible worlds are more than just probabilities, and game theory seems to ignore many aspects of this concept.

Winning strategy vs. reflective strategy

Third-order desirable outcomes and needs relate to generative patterns of observed social, physical and internal reality. They are building blocks of subjective reality. The needs for creativity and relatedness are examples.

Theories of rationality often face serious problems with third-order outcomes and needs. Why does an artist want to be an artist even if society sees him as an obsessed outcast and he has barely enough money for food and shelter? Why was a soldier ready to die (and did die) to protect his family and countrymen? It is problematic for a theory of rationality if it has to explain systematic and consistent decision-making paradigms – like the artist’s passion for art or the soldier’s patriotic self-sacrifice – as insanity and irrationality, even in cases where we empathically understand the reasons behind the agents’ decisions and they are not insane or irrational. I’m saying that dying for others, for instance, can be a completely rational and self-interested action. Its rationality depends on how you define yourself and your relation to everything else.

Now consider creativity and the need for creative insights. There is no heuristic map or winning strategy toward creative insight. Creativity is not an attainable state of things in actual or possible worlds, nor is it about understanding what it is all about. Yet it is not a random, arbitrary thing either. You can have a strategy toward a state in which your need for creativity is temporarily fulfilled: a reflective strategy. The reflective strategy is “pattern matching with your life and identity”. Both examples illustrate that the question “who is the subject called ‘me’, for whom one outcome is more beneficial than another?” always precedes the question “what actions are maximally beneficial for me in this situation?” Re-identification and re-initiation of the agent itself is a rational strategy toward ‘a better real’.

This is rather far from the classical theory of the rational agent and from current game theory. Nonetheless, the idea is simple: an agent is a system that can affect the surrounding reality, which is also a system. The system of the self is not separate from the system of the surroundings. Thus, a change in the system of the self is always a change in the system of the surrounding reality (including both the actual world and the possible worlds). This is why changing your perspective on a hard problem is a very efficient problem-solving method. That is, a change of self is obviously a rational action, but it’s hard to explain why exactly this particular change was rational – why did I choose this change of self rather than that one? Even in problem solving this is far from obvious; social situations require far more complex rational changes of self.

Perhaps a game theorist could claim that second- and third-order goods and needs can be reduced to first-order desirable outcomes, preferences and needs. I find such a claim poorly justified. I personally think that the needs for creativity or fairness, for instance, are irreducible to first-order needs like physical pleasure, status (or observed appreciation), satisfaction and safety. There are games that can be fully understood via a winning strategy of traditional game theory (e.g. chess). Social reality is not such a game. To attain the most beneficial outcomes in the game of the social real, we also need heuristic and reflective strategies. After all, winning is irrelevant if we win things that do not matter.


The study mentioned above is not necessarily evidence against the game-theoretical approach to human behavior. It just clarifies the boundaries within which current game theory can be applied. Despite this study, game theory can still be applied without any problems within certain, rather strict constraints. Actually, the study – as presented in the article – is about the internal logic of a rational agent rather than about game theory. Perhaps in the future we will have a game theory of everything that applies in any situation. I hope we do, since game theory is a pretty cool thing. 😉

However, to have a game theory of everything, we must better understand the internal logic of rational agents and the ways an agent interacts with the surrounding reality. We also have to answer more precisely the question “rational in which sense?” And finally, we have to pay more attention to modalities and the semantics of possible worlds.

My TechEd Europe 2013 Recap

For some strange reason I thought I would have time to briefly analyze all the sessions I participated in at the end of each day. I tried, but on the third day I had to give up. In this blog post I briefly comment on all the sessions (including the ones I have already blogged about). I include links only to the European versions on Channel 9. Most sessions were recorded and are already available as screencasts or videos on Channel 9. The only exceptions I know of are the preconference seminars from day 0, which were not recorded.

As a comment on my session strategy: I tried to avoid sessions that discussed things I already knew well. Secondly, I tried to avoid sessions rated 200 (intermediate) or 100 (basics) and to focus on advanced (300) and expert-level (400) material. Each day I made a few exceptions, especially if I was tired and wanted to relax, or if I simply did not find anything else interesting.

Day 0 (Monday)

From 0 to DAX (no video)

Comment: A nice and clear introduction to DAX syntax. I saw only the first part, since I chose to visit two pre-conference tracks.

Install and Configure Microsoft SharePoint 2013 (no video)

Comment: I missed the first part, in which the presenter probably installed the servers. In the afternoon part they installed and configured SharePoint. I picked up a few good tips and tricks, but there was rather little new to me.

See also my earlier blog post on the Monday.

Day 1 (Tuesday)

Keynote 1: The Cloud OS: It’s Time!

Comment: My first thought afterwards was “Okay, it’s cloud time – once again. According to Microsoft it has been cloud time for the last three years.” I admit Microsoft has made many good improvements to its tools and services. I especially appreciate that MSDN subscribers now get some cloud resources for free.

Modern Application Lifecycle Management

Comment: Brian introduces the new improvements in Team Foundation Server 2013. There are many good improvements to web access that I truly like. However, clearly the best improvement in Team Foundation Server is that it now supports Git.

Do You Have Big Data? (Most Likely!)

Comment: A surprisingly good presentation on big data and Microsoft’s Hadoop implementation, HDInsight. Some thoughts after the presentation: “Processing data in Hadoop takes time. Or perhaps you just don’t want to use Hadoop for calculations that would otherwise be fast.” “The tools are somewhat primitive, but I cannot deny that HDInsight’s web console is (still) pretty cool.”

I took a session on Hadoop at TechEd 2012, last year. Back then you had to write some Java to get any data out of Hadoop. Quite some progress on tools and .NET support since: now, for example, there is an ODBC driver for Hadoop that you can use in .NET code, in SQL Server Analysis Services and Integration Services, and with Excel and other Office tools that support ODBC data sources. Still, as far as I know, the .NET tools and APIs for Hadoop are somewhat limited. Therefore, if I needed to write code on Hadoop right now, I would seriously consider using Scala or Clojure instead. (Yes, no Java if I can avoid it. Functional programming shines when you need to manipulate data.)

Advanced Debugging of ASP.NET Applications with Visual Studio 2012

Comment: This was one of the greatest disappointments. The biggest problem in the presentation was that most demos failed partially or totally. Still, there were a few good tips and tricks that I wrote down.

See also my earlier blog post on Tuesday.

Day 2 (Wednesday)

Building End-to-End Apps for Microsoft SharePoint with Windows Azure and Windows 8

Comment: I was a bit late to this one, as the session I had planned to attend was canceled due to the absence of the presenter – which was announced 15 minutes after the session should have started.

Anyway, what makes this presentation interesting is the extent of the demo solution. The applications utilized a cloud database, cloud-hosted SharePoint apps, SharePoint document libraries, SharePoint workflows, Windows 8 apps and Windows Notification Services. (I might have missed something.)

Building Modern, HTML5-Based Business Apps on Windows Azure with Microsoft Visual Studio LightSwitch

Comment: LightSwitch had long been on my “check this out” list. I’m glad I finally spent an hour on it. I also took a lab on LightSwitch, but I didn’t complete it, as at some point I started to get a SQL Server compilation-task-related exception. As The Holy Google recommended upgrading an extension in Visual Studio, I gave up – I’m not going to upgrade anything on lab machines. Next I have to figure out in which cases LightSwitch is the best solution and in which cases you should rather use Access Services or InfoPath, or build the application from scratch using ASP.NET, Silverlight or something similar. This is definitely worth another blog post.

Flexible Source Control with Team Foundation Service and Git

Comment: Even though there was not much new for me in this presentation, I can definitely recommend the session. On Tuesday evening I discussed our version control needs and constraints with Martin Woodward. He agreed with me that Git is a better alternative for us than TFS Version Control. Our discussion might be one reason why he underlines in this presentation that Git is very handy e.g. when you need to deliver source code to a customer (as we often do). A longer answer to the question ‘why’ is worth another blog post.

Cybercrime: The 2013 Ultimate Survival Guide

Comment: I was a bit tired and wanted something entertaining (but still useful). This is a fluent and inspiring presentation on cybercrime. I have to say it is clever to run this kind of very appealing hacking presentation and deliver a lot of security-related improvements at the same time.

The hidden agenda in these “gray hat” security presentations seems to be: “(i) Did you not know that security is currently a big issue? You should seriously prepare for cyber threats. (ii) Did you know that there are severe security problems in Java and Android? (iii) But hey, we have lately invested a lot in security. If – or rather, when – you want to invest in cutting-edge security technology, buy it from us.”

Very nice rhetoric. It took me quite a while to identify this chain of arguments – and I’m a professional in argumentation and its analysis. (My major at university was theoretical philosophy, which is heavily about argumentation analysis.)

Build Data-Rich Solutions Faster with Microsoft Visual F#

Comment: Great talk. Like Dustin Campbell, I’m an F# fanboy. It’s a real pity that it is not used more widely.

Day 3 (Thursday)

Keynote 2: Windows is the Future

Comment: Even though the title is silly, the content was good for a keynote. I liked this keynote more than the first one, even though the subject is not quite what I do for a living. Windows 8.1 brings quite a lot of good upgrades – and it’s free.

BUILD 2013 Recap

Comment: A good overview of what’s new in Windows 8.1 for developers.

Real Experiences and Architectural Domain-Driven Design Patterns Applied on Microsoft .NET Development

Comment: This is a really good introduction to Domain-Driven Design. After the talk, Rovegård said that he is going to put the source code of the examples on GitHub. Once he has done that, he will announce it on his blog.

If I have time, I’ll “translate” the C# implementation into F#. I suspect that F#, as a functional-first, multi-paradigm language, fits DDD better than C# (or object-orientation-first languages in general). This is just a gut feeling. I’d like to test my presupposition on a case with an eloquent, corresponding C# implementation, so that I can easily compare the F# and C# implementations.

A Journey to the Dark Side of Social Networking

Comment: I have already commented on this here. Great stuff.

Hackers (Not) Halted

Comment: One more “gray hat” security session. See the related blog post. The session includes handy tips and tricks for dog owners who need cheap pants for their dogs.

Day 4 (Friday)

Deep Dive into the Windows Azure Active Directory Graph API: Data Model, Schema, Query, and More

Comment: I planned to take Scott Hunter’s Web API session but unfortunately it was full, so I took this instead. Until now, querying users from AD has not been too easy. For example, if you needed to write a custom client-side people picker control, you had to build the service-side implementation yourself. In the future this will not be the case, if you can use Windows Azure AD.

Web Deployment Done Right

Comment: The title is misleading; it should be “Why Web Deployment Is Done Right in Windows Azure”. The presentation is solely about continuous deployment on Windows Azure. If you haven’t seen how it works, check this out. The most interesting thing I didn’t know was that in Azure, deployment from Git is significantly faster than deployment from TFS Version Control. We have had similar benchmark results when comparing Git + TeamCity against TFS Version Control + TFS Build automation: the first combination is significantly faster.

Authentication and Authorization Infrastructure in Microsoft SharePoint 2013

Comment: This was one of the best sessions at TechEd. The first part of the presentation is definitely worth watching even if you don’t do SharePoint development; it’s mostly about the claims-based authentication (CBA) model and the Azure Access Control Service (ACS). The second half is a deep dive into the SharePoint authentication model and its extensibility points. In the second part, Pialorsi for instance demonstrates how to use Facebook to authenticate into SharePoint, and explains how to implement your own claims provider and deploy it to your own on-premises SharePoint farm.

Getting a Designer/Developer Workflow That Works

Comments: I have mixed feelings about this presentation. On the one hand, there were a few very good ideas. (I would call them good even if I had not done something similar myself.) On the other hand, the rest of the presentation was a bit naïve. I also disagree with some of the author’s ideas. I doubt it is really wise to use wireframes as a tool in contract negotiations in the way the author recommends; it is hardly the optimal way to deliver what the customer needs, and not only what he ordered. The designer/developer workflow is among those subjects I’d like to write more about.

What’s New in Windows 8.1 Security: Modern Access Control Deep Dive

Comment: This was a surprisingly inspiring presentation. The main argument was: “Passwords are a poor way to authenticate. Don’t use passwords for authentication unless it is absolutely necessary. Here are three better alternatives: biometric fingerprints, TPM key attestation and virtual smart cards.”

Gray Hats Make Software Security More Entertaining

I really like the new, appealing “gray hat” approach in software security presentations. It is far more interesting to first hear how to hack a system and then how to prevent such an exploit than to attempt to memorize a long security checklist. After all, people are driven by motivation, not by constraints. That is, in order to really care about security, you first have to be aware of the consequences of carelessness.

Here is a list of the “gray hat” talks I have seen and can personally guarantee are entertaining (and perhaps also useful):

Presentations by Andy Malone

A Journey to the Dark Side of Social Networking:

Cybercrime: The 2013 Ultimate Survival Guide

A list of remaining presentations from Andy Malone:

Presentations by Paula Januszkiewicz

Hackers (Not) Halted

A list of other videos from Paula Januszkiewicz:

By Marcus Murray and Hasain Alshakarti

Live Demonstration: Hacker Tools You Should Know and Worry About

APTs: Cybercrime, Cyber Attacks, Warfare and Threats Exposed

A list of other videos by Marcus Murray and Hasain Alshakarti:

Day 2 in TechEd 2013 – On cloud, Visual Studio’s new features and big data

Cloud OS: It’s Time – Again

Keynote is available on Channel9

A few comments from me:

I suppose the first time I heard Microsoft declare that the cloud era was about to begin was in 2010. And now it’s time for the cloud again? I have pondered whether the reason for the change in strategy was the fact that the service business is more profitable than just developing and selling a software product. In addition, services guarantee a more stable cash flow.

After the keynote another possible reason popped into my mind: developing services allows you to shorten time-to-market. This is actually an important factor, since currently many other technology companies – like Google – are often able to deliver more modern technology before Microsoft. As a consequence, new product releases feel old on the day they are released. For example, people talk about Google Docs instead of Office Web Apps because Google was first on the market. If the time-to-market were shorter, perhaps Microsoft would have been first in many areas of technology, including web-editable office documents. Sad but true: the current Microsoft can, most of the time, only react to innovations and trends created by others.

It’s also interesting that Microsoft has changed its strategy toward more openness. Firstly, it offers better support for non-Microsoft technologies like Hadoop, Oracle and SAP. Secondly, it has also open-sourced many core technologies (like ASP.NET and Entity Framework). The second opening relates to Microsoft’s attempt to deliver the latest technological innovations before others. The reason why they broaden the technology stack in the cloud is obvious: if you want more revenue from cloud services, you need to make it possible to deploy whatever people need for their work, not just software made with Microsoft technologies.

Modern Application Life Cycle Management

The lecture was mostly about the next release of Visual Studio and Team Foundation Server. I have to say that Microsoft has implemented some damn cool new features, including tools for portfolio management.

The lecture should be viewable on Channel9 within this week. The top 4 new features/announcements:

  • Probably my personal favorite improvement is CodeLens (I hope I heard the name right). Above a method signature you can see information related to it, e.g. in how many places it is referenced and who modified it last.

  • Another great improvement is Git support since, in my opinion, Git is a far better version control system than TFS’s. There’s nothing new in this announcement, but it was nice to see that they have improved the tools.
  • The ability to do performance testing from the cloud is a nice improvement. However, it is unfortunate that the site you test needs to be publicly available. As a consequence, you cannot use cloud-based performance testing for internal systems (without a hole in the firewall or other similar security trade-offs).
  • The last thing I want to pinpoint is better support for continuous delivery. Microsoft has acquired InRelease, a product with which you can automate the workflow from development through test environments to production. Currently the challenge in many big companies is that getting anything to production takes months. If you have an acute business need, waiting for months is a big waste of money. I’ve long been planning to build continuous delivery for customers manually. It’s absolutely great that continuous delivery is on Microsoft’s roadmap, so I can throw my plans for manual continuous delivery into the recycle bin. I’m expecting to hear a lot more about this later this year.

As a critique: I dislike the way Microsoft emphasizes tools when talking about enterprise agility. After all, the biggest challenge is not technical but social. With proper tools and sermons you cannot get further than level 2 out of 4 in the Agile Fluency model – and probably you won’t even reach level 2. You have to change your mindset and decision-making processes to make agile really work.

Earlier this month I watched a video from TechEd North America in which Microsoft talked about its own way of working. This stuff is far more interesting than plain tool demos: Deep Dive into the Team Foundation Server Agile Planning Tools (after 0:45:15).

I found it especially interesting that Microsoft underlined that each team should be able to choose its own working method, be it Scrum, Kanban or something else. Nice, but it is also somewhat limiting, as you don’t have a shared language all the way up to the business level.

Microsoft ASP.NET, web and cloud tools preview

The presentation focused mostly on new tools for web development. The beginning was slightly boring, as the lecturer read his slides aloud. The latter part consisted of a lot of very nice demos that were worth watching; the video should soon be available on Channel9.

I’m especially waiting for the live refresh feature: after you save a file, the browser automatically refreshes and shows the latest version. This is actually something Clojure developers have had for a long time. Finally Microsoft has stolen this great idea…

Do you have big data? (Most likely!)

I planned to attend the lecture “Entity Framework 6: database access anywhere, easily”. Unfortunately it was full, and therefore I chose the nearest alternative from my list of potentially interesting sessions. It happened to be “Do you have big data? (Most likely!)”. Again, the video should be available on Channel9 soon.

The lecture was mostly about Hadoop and Microsoft’s Hadoop service, HDInsight. I was positively surprised by how well Hadoop seems to work together with Microsoft tools, SQL Server and Excel. I’m absolutely sure that you still need to do some command-line magic to make Hadoop work seamlessly with the Microsoft technology stack, but the demos were promising. I definitely need to move Hadoop up a bit on my study list.

Advanced debugging of ASP.NET Applications with Visual Studio 2012

The last lecture on Tuesday was “Advanced debugging of ASP.NET Applications with Visual Studio 2012”. The lecture was good content-wise. Unfortunately, most of the demos failed and the presenter was not very good, so the video is probably not worth watching. Anyway, I got a big list of tools and technologies I have to study later (none of them were completely new, but I hadn’t seen them demonstrated before):

In addition, using load-testing tools as a helper for debugging was a good idea.

Day 1 in TechEd 2013 – From DAX to SharePoint

I was near the venue at 8 am. However, it took approximately 30 minutes to find the exact location. It’s weird how long it took me to find an enormous conference center – or rather, the correct entrance. Later I found out that I had chosen the wrong exit from the Metro. Had I chosen a different exit, it would have been impossible to miss.

After registration and an unsuccessful attempt to find the cloakroom, I entered a seminar room. The subject was “From 0 to DAX”. I’m satisfied: the lecture gave a pretty good idea of what writing DAX is all about – plus a few good tips and tricks. I’ll try to fit a hands-on lab on the subject into my program. And if I manage to arrange some time at work, there is an Excel report most people hate that might benefit from an upgrade to PowerPivot and DAX…

To be honest, I selected “From 0 to DAX” somewhat randomly: it was in the first room I found among the three tracks I considered most useful. I had decided that if the morning’s lecture wasn’t especially interesting, I would switch tracks in the afternoon. So I did: I chose to spend the afternoon on the SharePoint installation and configuration track.

From the afternoon’s SharePoint installation and configuration session, what I remember best is the lecturer’s (Todd Klindt’s) joke about the SharePoint information dialog that says “It should not take long…” He agreed: “It should not take long – but it does.”

If you ask me, the single biggest usability problem in SharePoint is that it’s oftentimes damn slow. Luckily, improved caching makes it faster in many common scenarios. Yet power users cannot avoid the actions that remain slow. The second biggest usability issue reveals that I’m a developer: it is the lack of a proper, statically typed, concise development model for common development tasks like creating content types, web templates and provisioning files. Compared to clever, modern convention-over-configuration frameworks like ASP.NET MVC and Entity Framework, it’s hard to believe that SharePoint’s tool-driven and clumsy feature framework comes from the same company.

Back to the subject. The most useful parts of the lecture were probably the PowerShell scripts that make installation and configuration a bit less painful. I’ll add a link to the scripts here once I find it. After criticizing SharePoint I have to say that I truly appreciate that you can do almost everything with PowerShell. I also liked the way the lecturer managed the slowly progressing installation and configuration: he kept telling anecdotes, best practices and tips and tricks. I wrote down that database aliases might be useful e.g. in migrations, and that you should use a whitelist to circumvent the loopback check rather than turn it off entirely.

Related links:

TypeScript and Functional Programming

NOTICE: The latest versions of TypeScript support generics. Thus, this post is currently out-of-date. 

A few weeks ago I had a short discussion about TypeScript at the TechDays 2013 event. In my opinion, TypeScript is one of the coolest things Microsoft announced last year. In short, it allows you to write type-safe JavaScript using most of the goodies included in the forthcoming ECMAScript 6 (ES6) standard. TypeScript code is compiled into JavaScript, but thanks to the type system the compiler and IDE help you a lot. It is noteworthy that Visual Studio is not the only supported IDE; this video demonstrates IDE support in Sublime Text 2.

Seemingly there is a myth that TypeScript limits the expressiveness of JavaScript and that the TypeScript type system restricts you from using functional programming techniques. That’s not quite true. However, if you prefer the functional programming (FP) paradigm over the object-oriented paradigm (OOP), you cannot always use the TypeScript type system efficiently. At least not yet: according to Anders Hejlsberg, support for generics is on the roadmap.

TypeScript and functional programming

Douglas Crockford summarized that JavaScript is Lisp in C’s clothing. JavaScript is a functional programming language in the sense that functions are truly first-class citizens. The same is true for TypeScript. For example,

var sum = (a, b) => a + b

creates an object of type (any, any) -> any. If you want to make it type-safe you can write:

var sum = (a:number, b:number) => a + b

Now sum is of type “(number, number) -> number”.
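The same function type can also be written out as an explicit annotation on the variable, in which case the lambda’s parameter types are inferred from the annotation. A minimal sketch (the variable name is mine, just for illustration):

```typescript
// Annotate the variable with the function type "(number, number) -> number";
// the lambda's parameters a and b are then checked against that type.
var sum: (a: number, b: number) => number = (a, b) => a + b;
console.log(sum(2, 3)); // prints: 5
```

Passing non-numbers to sum is now a compile-time error instead of a silent runtime surprise.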

TypeScript provides some conveniences for functional programming on top of plain JavaScript. However, currently its support for the functional programming paradigm is rather limited; or rather, it doesn’t provide type safety in many FP scenarios.


Consider the following simple example: I would like to have an F#-like fold higher-order function in TypeScript. In F#, fold’s signature for lists is ('a -> 'b -> 'a) -> 'a -> 'b list -> 'a. You can calculate the sum of a list of numbers as follows:

[1;2;3] |> List.fold (+) 0; 
// returns 6

Currently the best match for a fold function is ECMAScript 5’s (ES5) reduce. You can use ES5’s Array’s reduce as follows:

[1,2,3].reduce(function(accumulator, current) {
   return accumulator + current;
}, 0); 
// returns 6

As TypeScript implements the ECMAScript 5 standard, you can use it in TypeScript as well. Its signature is (callback: (previousValue: any, currentValue: any, currentIndex: number, array: any[]) => any, initialValue: any) => any. Even if the array is typed, it is not type-safe.

Can you create a nice type-safe version of Array’s reduce in TypeScript? No. You have to compromise either type safety or generality:

// Alternative 1: an implementation that compromises type safety
function fold(
   accumulator : (state: any, current: any) => any,
   seed : any,
   data : any[]): any {
   return data.reduce(accumulator, seed);
}

// Alternative 2: an implementation that compromises generality
function fold(
   accumulator : (state: number, current: number) => number,
   seed : number,
   data : number[]): number {
   return data.reduce(accumulator, seed);
}

Basically, there is no elegant way to implement type-safe higher-order functions in TypeScript. Yet TypeScript makes calling the reduce function a bit more convenient:

[1, 2, 3].reduce((accumulator, current) => accumulator + current, 0);
// returns 6.
// Note: just like in ECMAScript, you don't need to use all parameters
// of the callback.

This is still somewhat clumsy compared to, for instance, LiveScript. In LiveScript, you can say:

[1 2 3] |> _.reduce _, (+), 0

To sum up

TypeScript does allow functional programming, and it is a bit better for it than plain JavaScript. You have first-class functions, closures and lambdas just like in plain JavaScript. In addition, you can write type-safe functions. However, there are limitations in the current implementation of the type system: oftentimes you cannot fully utilize it when using functional programming patterns. Higher-order functions were just one example; I could have chosen any other basic pattern in functional programming, but for the sake of simplicity I chose fold as an example of higher-order functions.

The current version of TypeScript is designed for object-oriented programming (OOP) and provides only marginal benefits if you want to use the functional approach. What I love about TypeScript, however, is that you don’t need to make a binary decision between the assurance created by type safety and the expressiveness of the functional programming paradigm. There are many cases in which you can have both, even if oftentimes you have to compromise type safety in order to use FP patterns, and vice versa.

Then again, if you want to write functional rather than object-oriented code, there are better alternatives. LiveScript, especially, is very promising. Once Microsoft implements support for generics, the situation may change. After all, type safety is a great benefit when writing big applications – no matter which programming paradigm is used. In a perfect world, someone would combine TypeScript’s optional type safety with LiveScript’s syntactic sugar for functional programming.
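As the notice at the top of this post points out, later TypeScript versions did add generics. Under that assumption, a fold that compromises neither type safety nor generality can be sketched roughly like this (the function and variable names are my own, not from any library):

```typescript
// A generic fold: both type-safe and general, once generics are available.
// TState is the accumulator type, TItem the element type of the input array.
function fold<TState, TItem>(
    accumulator: (state: TState, current: TItem) => TState,
    seed: TState,
    data: TItem[]): TState {
    let state = seed;
    for (const item of data) {
        state = accumulator(state, item);
    }
    return state;
}

// Both calls are fully type-checked; mixing up the types is a compile error.
const sum = fold((state: number, n: number) => state + n, 0, [1, 2, 3]);
const joined = fold((state: string, s: string) => state + s, "", ["a", "b", "c"]);
console.log(sum, joined); // prints: 6 abc
```

This is essentially the F# signature ('a -> 'b -> 'a) -> 'a -> 'b list -> 'a from the example above, spelled with TState for 'a and TItem for 'b.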