Craft conf, 2017, Day 2 – Impressions cont’d

April 28, 2017

Following my previous post, Day 1 impressions, let me share what I learned today on the second day of Craft conf 2017.

  • The first presentation I attended was about HTTP/2. Daniel Stenberg is from Mozilla and has been actively working on HTTP/2 in the IETF. That sounded like a good setup for an interesting presentation – and I was right. We were given some highlights on the evolution of HTTP, THE web transport protocol, from HTTP/1 through 1.1 to HTTP/2. We learned that the major changes HTTP/2 brought were multiplexing, header compression, the switch to a binary protocol and finally server push. Multiplexing enables sending multiple requests on the same connection without having to wait for the first to complete, which makes communication an order of magnitude faster than its predecessors. It was very interesting to learn that even though HTTP/2 is only a few years old, it’s already being challenged by QUIC, Google’s own protocol, which is supposed to solve a head-of-line blocking problem HTTP/2 introduced: since all streams are multiplexed over a single TCP connection, a single dropped packet can block all of them. Since QUIC is specific to Google, effort is now being spent on standardising it at the IETF.
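To get a feel for why multiplexing helps, here is a toy simulation – not real HTTP, each network exchange is just replaced by a short asyncio sleep – contrasting one-request-at-a-time (HTTP/1.0 style) with many concurrent requests sharing one connection (HTTP/2 style):

```python
import asyncio
import time

LATENCY = 0.05  # simulated time to serve one request, in seconds

async def fetch(resource: str) -> str:
    # Stand-in for one request/response exchange on the wire.
    await asyncio.sleep(LATENCY)
    return f"body of {resource}"

async def sequential(resources):
    # HTTP/1.0-style: wait for each response before sending the next request.
    return [await fetch(r) for r in resources]

async def multiplexed(resources):
    # HTTP/2-style: all requests in flight concurrently on one connection.
    return await asyncio.gather(*(fetch(r) for r in resources))

def timed(coro):
    start = time.perf_counter()
    result = asyncio.run(coro)
    return result, time.perf_counter() - start

resources = [f"/asset-{i}" for i in range(10)]
_, t_seq = timed(sequential(resources))
_, t_mux = timed(multiplexed(resources))
print(f"sequential: {t_seq:.2f}s, multiplexed: {t_mux:.2f}s")
```

With ten simulated resources, the sequential variant pays the latency ten times over while the multiplexed one pays it roughly once – the same intuition behind HTTP/2's speedup.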
  • The second presentation was about mutation testing. I must admit two things here: I had never heard about this testing method, and one of the reasons I chose this presentation was that the presenter was working for SAP Hybris – the company whose product I’m using on a daily basis. Well, I didn’t regret listening, as I learned what mutation testing is about: evaluating the quality of the tests themselves (who tests the tests?). On the other hand, it could have been squeezed into 20 minutes or so, and it had nothing to do with hybris itself. Not that it had been promised, yet I was secretly hoping for it. 🙂 Nicolas Fränkel was using PIT and demonstrated how to identify tests that pass but don’t actually verify the right behaviour. It’s so easy to cover code with tests just for the sake of better code coverage, but we should never forget that we don’t write tests for better stats, but ultimately for better software.
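The core idea of mutation testing can be shown in a few lines – a toy sketch, not PIT itself (PIT works on Java bytecode): mutate an operator in the code under test and see whether the test suite "kills" the mutant. A surviving mutant means the tests are weaker than the coverage numbers suggest.

```python
def is_even(n):
    # Original implementation under test.
    return n % 2 == 0

def is_even_mutant(n):
    # Mutant: the == operator replaced by >= (a typical mutation-operator change).
    return n % 2 >= 0  # always True -- clearly wrong

def weak_test(impl):
    # Only checks an even input, so the always-True mutant slips through.
    return impl(4) is True

def strong_test(impl):
    # Also checks an odd input, which the mutant gets wrong.
    return impl(4) is True and impl(3) is False

# The weak test passes for BOTH the original and the mutant: mutant survives,
# so the suite is too weak despite "covering" the function.
assert weak_test(is_even) and weak_test(is_even_mutant)
# The strong test kills the mutant while still passing for the original.
assert strong_test(is_even) and not strong_test(is_even_mutant)
```

Real tools like PIT generate many such mutants automatically and report a "mutation score" alongside plain line coverage.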
  • I was truly happy to attend the presentation of Sam Newman, who shared his confusion about serverless computing. As it’s just becoming the next big thing, it’s important to understand what’s behind the hype. I recall the first time I heard about serverless: it was literally one year ago, when Adrian Cockcroft gave a speech on exactly this topic. Now that Tim Wagner’s presentation was dedicated to serverless, it was a blessing to see “the other side”. Sam started with the name, serverless, and how confusing it can be to people – as servers are and will always be there. He compared different variations of *aaS (IaaS, PaaS, FaaS, BaaS, CaaS) and concluded that serverless should be considered at least a form of PaaS. The implication of (near-)infinite scaling of computing resources for functions is that soon the data store will become a bottleneck – unless that scales as well (or we simply reject further incoming calls, e.g. via a circuit breaker). He also mentioned two interesting aspects:
    • Vendor lock-in, which we shouldn’t really be afraid of. He suggests not thinking about lock-in, but rather about the cost of migration. That is, the question really is: pay now or pay later, i.e. invest in introducing an abstraction layer on top of the vendor (e.g. AWS), or pay more for migration when moving away from the current vendor.
    • How serverless devalues things like agile, devops and even micro services.
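The circuit-breaker idea Sam mentioned – shedding load before the data store drowns – can be sketched in a few lines. This is a deliberately minimal version (a real one, e.g. the pattern described by Michael Nygard in Release It!, would also add a timeout and a half-open state):

```python
class CircuitBreaker:
    """Reject calls outright after too many consecutive failures ("open" state)."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.max_failures

    def call(self, func, *args):
        if self.open:
            # Fail fast instead of piling more load on a struggling backend.
            raise RuntimeError("circuit open: rejecting call")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success resets the counter
        return result

breaker = CircuitBreaker(max_failures=2)

def flaky():
    # Stand-in for a data store that is currently overloaded.
    raise IOError("data store overloaded")

for _ in range(2):          # two consecutive failures trip the breaker...
    try:
        breaker.call(flaky)
    except IOError:
        pass

assert breaker.open         # ...and further calls are rejected immediately
```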
  • I think Cornelia Davis has a very lovable presentation style. She’s very smart and hands-on and gave a great talk on what to expect from a PaaS. Yes, I know she’s from Pivotal, yet she delivered her thoughts without me feeling it was a pitch, AND the things I heard were really useful (and actually quite timely for me). She first listed a couple of obstacles, from silos between dev, QA and prod, through risky, manual deployments, to changes being treated as exceptions rather than the rule. The secret sauce of a successful PaaS includes things like using a single deployable unit, which is promoted across environments without any changes and stays immutable even in production. The ability to self-heal is also very important, when one thing is constant: the fact that things change. Differences between environments must go away, self-service should boost agility, and frequent deployment should simply remove the fear of change (and actually embrace it).
  • I must admit I had expected a very different presentation from Phil Calçado, but I didn’t regret the one I attended. Phil is from DigitalOcean and he talked about the economics of micro services. I learned two very important lessons from Phil today:
    • If an organisation decides to start breaking up the monolith and moves to micro services, it can stop any time somewhere in between – this even has a name: a microlith. It doesn’t have to finish with a “full transformation” of the architecture; the right stopping point is specific to each organisation. Phil suggested starting with experimentation, then creating checklists and standards, then copy-pasting and finding/writing libraries and tools, and finally ending up with a platform. Now, companies may never reach the fourth stage (the platform), yet where they stop might still be perfect for them.
    • This was the first time I heard about the inverse Conway’s law, i.e. when the architecture drives the organisation structure, not the other way around. An org must be really flexible to allow itself to be driven, though, yet it may work.
  • Finally, the last presentation I listened to today was about ChatBots, from Jason Hand. The concept is so simple, yet brilliant: take an instant messaging tool (platform) and give instructions to a bot to do certain things, typically related to devops. Jason shared an interesting story about an Ops guy working for GitHub, who got an alert about a DDoS attack while he was on his way to the office. He just said: “Hubot, shields up!”, and the problem was gone by the time he arrived at the office. He pointed out many advantages of ChatBots:
    • Social: collaboration, knowledge transfer, make work visible, shortened feedback loops, etc.
    • Technical: automation, speed, system of record (the ChatBot keeps an audit trail), etc.

As systems are typically non-linear, we often don’t have a single root cause for a problem, but rather a distribution of causes. As such, ChatBots may be very handy in fixing issues and generally giving a boost to productivity.

Well, I got a little tired by the end of the conference – two days were just enough. I found that serverless has replaced micro services in popularity, but not entirely: “say micro services one more time” – although past the peak of the hype, they are still everywhere. And I was so glad that I could attend presentations outside my comfort zone/narrow focus area – it’s really important for me to see that there’s life out there. 🙂

Hope you enjoyed reading, please share your thoughts!

Thanks,

Gabor

Craft conf, 2017, Day 1 – Impressions

April 28, 2017

Hi all,

Another year has passed and Craft conf is here again! I was so excited to attend the conference, as it has always given me new thoughts, allowed me to recharge and let me meet new people. It’s now the 4th time for me to attend – I’ve actually been there since the very beginning – and even though it naturally brings less new information than it did initially, I still enjoy it.

Let’s jump right into the middle and share what I’ve seen today on the first day:

  • Tim Wagner’s keynote presentation was awesome! It was about serverless computing, which has become pretty hot recently. He talked about event-driven architecture, shared specific use cases and even showcased an end-to-end live demonstration from code to running serverless app. The last 10 minutes or so weren’t so perfect, though. Based on the real-time feedback, some people apparently thought that it was more like an AWS pitch, which shouldn’t have happened in a keynote, but with due respect I disagree. Okay, it was a pitch, but it was about AWS and serverless, and one cannot take credit away from Amazon for pioneering potentially the next big thing. It’s that simple. I respect that Tim was prepared to answer seemingly unpleasant questions, such as when he would NOT recommend lambdas; apparently, though, he wasn’t as well prepared for the demo, as that was when he started to massively lose the audience.
  • Dan North is back, as always. His presentation, Decisions, Decisions, didn’t impress me, I must admit. The basic message of “we always make a trade-off” is so much of a cliché to me that it’s not worth building a presentation around it. The problem may really be with me, but I didn’t find anything useful in it.
  • Randy Shoup’s Effective Microservices in a Data-Centric World was so cool! He’s VP Engineering at Stitch Fix, whose business model just impressed me: they ask you to fill in a survey, which is analysed by stylists, and deliver clothes to you. If you love the clothes in the package, you buy them; those you don’t, you can simply return. Stitch Fix employs as many data scientists as engineers, which is an unbelievable ratio. With this brain power, they do inventory management, machine learning, algorithmic recommendations, etc. to make sure you always get what’s best for you. Randy is a firm believer in TDD, continuous deployment and the use of micro services, and he shared a couple of hands-on hints with us. They started with a monolithic DB, where all apps were accessing shared data in the same DB. They gradually decoupled the data into separate services, each responsible for serving requests for the data it owns. He talked about the use of event sourcing, different approaches to database joins across segregated data, and the challenges with transactions in the world of micro services. Definitely worth watching!
  • The build trap from Melissa Perri was so enjoyable! I really love Product Management and the fact that such topics are discussed at a conference full of engineers. The build trap is the ever-accumulating product backlog with features potentially no one wants. If we don’t measure what real end users actually appreciate and don’t solve their problems, then we simply will not get back our return on investment. By real end users, Melissa didn’t mean a test target group in your company, but your real customers. Engineers and other creative folks must come and meet customers so that they understand what they need to be making and why. These user surveys will never be too costly: delivering the pointless is costly. There is a break in the communication between managers and the team: the team doesn’t understand/know the vision of the company, and managers don’t pay attention to the challenges the team is struggling with. The vision must be communicated through measurable and achievable objectives to be reached in a set time frame. Another point on who is creative: I’m just starting to realise that one of the ingredients of successful product management is creative people, no matter where they come from. Whether they’re creative in visual design, software design or can simply think out of the box – it’s equally good. For that reason, we must not let either software engineers or UX designers claim the ultimate truth, as it must be a joint effort.
  • Coincidentally, Jeff Gothelf’s talk on Scaling Lean: Principles over Process was a logical continuation of Melissa’s, which I discussed right above. Jeff ran a survey on Twitter about why large companies have scaling issues, and although the feedback included heavyweight processes, bureaucracy, silos between disciplines and teams, and finally a general worry about the brand, he concluded that it’s often principles that are missing. He had collected a list of principles and shared them with us, along with tactics we can use to achieve our goals and hold ourselves to those principles. Let me share only the principles with you, because they’re so great and right to the point that I simply can’t resist. I sincerely recommend, however, that you watch the presentation and learn from it. Here they are:
    • Principle #1: Customer value = business value. If we make the customer successful, we’ll be successful. You must manage objectives and key results that are both qualitative and quantitative (measurable), inspirational, time-bound and actionable.
    • Principle #2: Value learning over delivery. Let teams pilot things, encourage them to be experimental. Of course, you must carefully balance between experimentation and delivery, still learn, learn, learn.
    • Principle #3: Radical transparency. Transparency brings trust and it doesn’t go only internally, but following Melissa’s advice: go and reach out to your customers. I love the quote of the day: “You decide what is minimum, but customer will decide what is viable”.
    • Principle #4: Humility in all things. I love this, it is so true! Don’t assume you know what to build, but go and figure it out. Value real roles and people instead of job titles. Talk and talk to both external and internal stakeholders.
  • The last presentation I attended during the day was from Adrian Mouat, Chief Scientist at Container Solutions. Adrian is the author of the book Using Docker, and as such, he was talking about deployment techniques with micro services. I was there primarily driven by a “how is it going with containers these days?” interest, and I think it was worth it. One of the main conclusions I drew from his talk is that this space is still somewhat immature, as companies typically use bespoke, internal solutions built mainly on either Kubernetes or Docker Swarm. Adrian took the most typical deployment models – blue/green, canaries, ramped deployment and feature flags – and shared some insights on the biggest issues and the pros/cons of each from the containers’ perspective. It was very useful to get a high-level view of these challenges, but – as he suggested – there are still major topics to deal with, such as API versioning, database state, monitoring, and more tools and patterns.
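The canary model from the deployment talks can be sketched with a few lines of routing logic. This is my own illustrative version (the function name and percentage are assumptions, not from any of the talks): hash each user id into a bucket so that a stable slice of users is routed to the new version, and sessions stay sticky across requests.

```python
import hashlib

def in_canary(user_id: str, percent: int) -> bool:
    """Deterministically route a stable `percent`% slice of users to the canary."""
    # Hashing (rather than random choice) keeps each user on the same version.
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

# Roughly 10% of users land on the canary build; the rest stay on stable.
canary_users = [u for u in (f"user-{i}" for i in range(1000))
                if in_canary(u, 10)]
print(f"{len(canary_users)} of 1000 users on the canary")
```

Ramping up is then just raising `percent` step by step; rolling back is setting it to 0 – no redeploy of the old version needed.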

 

Hope you enjoyed this post, please let me know what you think!

Best,

Gabor

Key take-aways from Amuse & Crunch conferences, Day 1

October 6, 2016

Autumn has come, it’s time for another conference. It’s Budapest again, this time it’s Amuse and Crunch. Amuse is about User Experience (UX) and Crunch is about Big Data (BD). The two are being held together as probably neither would attract a big enough audience and equally importantly the organisers are the same. So far so good, I’m interested in both.

I’m, as a solution architect, generally not involved deeply in either, but since I’m interested in product management, too, I would like to have a high-level overview of both. This puts me in an interesting position: while my colleagues are complaining that some presentations are not deep enough, I generally benefit from most of them, as they widen my spectrum well. For example, when I told my colleagues who happen to have BD expertise that I was attending a presentation on how to build up a UX team from scratch, they laughed out loud that I was going to a CSS pres. Ridiculous folks, I know. 🙂 Still, when we met after the presentation, I was happy that I had attended, and they weren’t satisfied with theirs. Whiners.

Actually, I often take an unconventional approach when attending conferences: instead of “playing in the safe zone”, I go and listen to such presentations that are out of my comfort zone. And even if I grasp only the highest level and distill everything to a single sentence, I believe it was worth it.

This is not the first conference from these organisers that I’m attending. One of the things that I like very much is the use of sli.do. I really like the way they make presentations interactive via technology: you just post any question during the presentation, and if it gets enough votes, it’ll be answered by the presenters during the Q&A part. Very nice. Still, I’ve learned two lessons today:

  • I attended a presentation on data visualisation, which was very interesting. The presenter was a technology evangelist from Tableau and, unsurprisingly, demonstrated the capabilities of the tool very effectively. I quickly checked the pricing of the tool and found that it was very high: between $1,000 and $2,000 for average people like me. I asked the question via sli.do whether they were planning to open up to the masses via lower rates, and to my biggest surprise I got moderated. My question initially showed up in the list of questions for a short period – it had even received a couple of up-votes – but when it came to answering the questions, it suddenly just disappeared. I think my question was a valid one, and it was even asked in a polite way, but it seems it wasn’t politically correct. Oh, my.
  • The next story is about the mechanics of up- and down-voting. The presentation was about UX @ LEGO, and I liked it very much. It was about how to build up a UX discipline & team at a company like LEGO. I asked the following question: “How do you time-box creative people?”. I deeply believe it’s a valid question. It got ranked as the 2nd most popular question of all. And then I saw it declining: it was liked by 5 voters at one point, and a few minutes later only by 2. That was when I realised that down-votes work against up-votes. Mine became the 3rd most popular question, and chances were that it wouldn’t be asked during the Q&A. I quickly down-voted the second most popular question (shame on me), after which my question became the 2nd most popular again, but it was too late: the moderator eventually asked the other question, and there was no time for a third one. Can you guess what the 2nd most popular question was? “Do you use LEGO at work?”. Grrr ….

But before the first day of the conference, I had attended a workshop yesterday, which was about Lean Analytics. It was AWESOME. It was held by Ben Yoskovitz, who is the author of the book with the same title, and the workshop was full of hands-on insights as well as theory. I couldn’t have imagined a more effective way of learning about product management and analytics. In one sense, it could have been counter-productive, as I first thought it wouldn’t be worth buying the book after this presentation, but on second thought I think I really will buy it: I just can’t have this knowledge missing from my book shelf. Such a great day!

Now then, my key take-aways from today. Warning: absolutely subjective, but hopefully still informative:

  • The presentation from Andy Cotgreave @ Tableau was very inspiring in the sense that we must really go beyond showing raw numbers and “first-instinct diagrams” if we want the audience to quickly grasp and remember what we really want to say. His example of Iraq’s bloody toll was really interesting and shocking at the same time: the creator of the chart played with colour, the direction of the chart bars and the title. Also, Tableau seems to be a very powerful tool for this purpose, although the price is fairly high if you just want to get familiar with it.
  • Dan McKinley revealed some insights into data analytics and how it drove business when he was working for Etsy. He shared how easy it was in the early days to think that cool ideas would surely generate more business, but only when they started to measure did they realise how far that was from the truth. It’s useful to know how much contextual knowledge counts when optimising for conversion, as you won’t follow the same strategy for low-cost gadgets as for relatively high-cost furniture. The most memorable sentence for me was still this one: “You must never assume Product Managers will fully know what they’re asking.” Nicely put.
  • Laurissa Wolfram-Hvass was talking about research done @ MailChimp. It’s the little(?) things that grabbed my attention the most:
    • Usability lunches – free food attracts everyone and it’s a great opportunity to unleash creativity.
    • Visit & film customer offices and use the footage in your materials to show them how well you understood them and their business.
    • A customer panel is a great tool to get your most influential customers around the same table and hear how they’re using your product and what they suffer from the most.
    • Encourage everyone at your own company to do research for the company’s overall benefit.
  • Marton Trencseni from Facebook was talking about data science. They gather metrics mostly about growth and engagement, of course at an unthinkable scale. They collect metrics at a very granular level and cut the data per access interface: all – mobile – iOS – Facebook for iPad is just a single path among many. What I liked the most, though, was the “counter metrics matrix”: they set a target metric (e.g. increase Daily Active People) which they compare with another metric that is not necessarily correlated (e.g. the number of support tickets). They take different actions depending on how these metrics change:
    • If DAP goes up and the # of support tickets remains flat, they keep the new feature.
    • If DAP goes down and the # of support tickets goes up, they decide on a case-by-case basis.
    • If DAP remains flat and the # of support tickets goes down, they keep the new feature.
    • etc.
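The counter-metrics matrix is essentially a lookup table over (target-metric change, counter-metric change) pairs. Here is a small sketch of that idea – only the three rows above come from the talk; the remaining entries and the fallback action are my own illustrative guesses:

```python
# Each key is (change in Daily Active People, change in # of support tickets),
# where a change is one of "up", "down", "flat".
ACTIONS = {
    ("up",   "flat"): "keep feature",            # from the talk
    ("down", "up"):   "review case by case",     # from the talk
    ("flat", "down"): "keep feature",            # from the talk
    ("up",   "up"):   "review case by case",     # my guess
    ("down", "flat"): "roll back feature",       # my guess
}

def decide(dap_change: str, tickets_change: str) -> str:
    # Unlisted combinations fall back to a manual review (also my assumption).
    return ACTIONS.get((dap_change, tickets_change), "review case by case")

assert decide("up", "flat") == "keep feature"
assert decide("down", "up") == "review case by case"
```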
  • Janne Jul Jensen from LEGO was talking about setting up a UX team from scratch. The biggest challenge wasn’t necessarily the team, but the fact that the discipline was brand new to the company. I truly understand this, as I’ve seen it a number of times at my company. It takes a great effort to find your own identity, i.e. what UX is at LEGO, and communicate that consistently and repeatedly. Also, you must educate people to get rid of misconceptions about a new discipline. Hats off, this must have been a great effort, and I can tell it’s by far not over.
  • Mike Olson is an industry veteran who co-founded Cloudera. He shared a lot of insights, not only from the past, but also a vision of the future of Big Data. All of his examples revealed deep insights and forecast a more advanced future relying on technology, and BD processing in particular; still, the most moving part was when he was talking about technology serving people in healthcare, whether as employees or patients. The two examples – analysing the activities of newborns in real time, and doing predictive analytics for regular patients to prevent sickness and hospital stays – are very useful applications of technology serving humans.

Finally, I liked the after-party in that the food, the drinks and the music were all just right. But it’s not the first time I’ve found that I cannot network effectively. And it’s probably not just me: IT guys like me will simply not go up to someone and ask “hey, Dude, who are you and what are you up to?”. Maybe it’s about neither my profession nor my personality. It was the same at QCon in London this spring: people mostly talked to their colleagues or folks they had previously met. So here is the next big challenge for the organisers of any conference: how to get complete strangers to meet, discuss and enjoy conversations with each other?

Hope you enjoyed reading this wrap! Any thoughts from your side?

Best,

Gabor

Payment Card Security Standards – Do you really want to bother with it?

June 15, 2016

When building an e-commerce system, one of the most severe security concerns you have to deal with is PCI compliance for payment. PCI stands for Payment Card Industry and, as you can imagine, there are quite a few standards to comply with, of which DSS (Data Security Standard) is only one. It’s quite typical to integrate with a payment service provider not only to “outsource” the functionality of paying with all kinds of cards, but also the pain of dealing with such sensitive data as the card verification code, the primary account number, etc. Wherever I’ve looked, I’ve always found that organisations didn’t implement their own systems to comply with PCI DSS; rather, they chose a vendor whose product they could use for this purpose. But, as a result, I didn’t know what the pain they want to avoid really is. So I checked it out for myself!

I visited the official PCI Security Standards web page and downloaded their guide from the documents section, which is essentially a quick reference guide. I learned a lot even from this brief document and thought you might also be interested in what I found. The list below is just an excerpt and doesn’t even attempt to be exhaustive; the main purpose is to illustrate the type and amount of requirements one may face should they want to implement a system that stores cardholder data according to the regulations defined by the PCI Security Standards Council.

You must build and maintain a secure network and systems; the related requirements are:

  • Installation of firewalls. You must control ALL connections in and out, and there must be a business justification for each authorised access. Prohibit access from the public internet to the system that stores card data. Use personal firewalls even on the machines that will access your system.
  • Obligatory change of vendor-supplied passwords. ALWAYS change all default passwords and remove any unnecessary default accounts. Always keep your system up to date whenever a new vulnerability is identified. Use strong cryptography and encrypt all non-console administrative access.

With respect to protecting cardholder data, the document shows an easy-to-understand table of how each card data element should be treated. For example, storage is permitted for the Primary Account Number (PAN), cardholder name, service code and expiration date, but forbidden for full track data (from the magnetic stripe, chip, etc.), CVV2 (and other three- or four-digit values printed on the back of the card) and the PIN. Beyond that, you should

  • Take care of your data retention policy and purge unnecessary data at least quarterly.
  • Use data only for authorisation and store it only if you really need it (i.e. have a business justification).
  • Mask the PAN when displaying it and render it unreadable when storing it.
  • Implement procedures to protect any keys used for encryption from disclosure and misuse.
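Masking the PAN for display can be done in a few lines. A minimal sketch (the function name is mine; the standard's requirement 3.3 permits showing at most the first six and last four digits, with everything in between masked):

```python
def mask_pan(pan: str) -> str:
    """Mask a PAN for display: only the first six and last four digits visible."""
    digits = pan.replace(" ", "")
    if not digits.isdigit() or len(digits) < 13:  # 13 is the shortest valid PAN
        raise ValueError("not a plausible PAN")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

assert mask_pan("4111 1111 1111 1111") == "411111******1111"
```

Note that this only addresses display; for *storage* the PAN must be rendered unreadable (hashing, truncation, or strong encryption), which masking alone does not satisfy.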

Encrypt transmission of data and

  • Use strong cryptography and security protocols for safeguarding data, especially if it travels over open, public networks.
  • Never send unprotected PANs.
  • Make sure policies and procedures are documented and well-known by everyone.

Protect against malware and viruses.

  • Deploy anti-virus system on all systems, keep it up-to-date and perform periodic scans.
  • Make sure this protection cannot be disabled unless specifically authorised on a case-by-case basis.

Develop and maintain secure systems and applications, where you

  • Regularly identify security vulnerabilities and apply vendor-supplied security patches.
  • Develop software in accordance to PCI DSS and apply related practices throughout your development life cycle.
  • Ensure all public-facing web applications are protected against known attacks by performing application vulnerability assessment after any changes.

You must also implement strong access control measures including limiting access to cardholder data only to such personnel, systems and processes that really need to know related data and according to job responsibilities. Use “Deny all” policy by default for any access.

Use proper authentication, where you follow these guidelines:

  • Implement proper user identification management for users and administrators on all system components. Assign every user a unique identifier for good traceability.
  • Employ at least one of these to authenticate all users: something you know, such as a password or passphrase; something you have, such as a token device or smart card; or something you are, such as a biometric.
  • Use multi-factor authentication applying at least two of the above three methods. Hint: using one factor twice (e.g. using two separate passwords) is not considered multi-factor authentication.
  • Only database administrators may have direct access to data; all other users must access cardholder data through programmatic means.
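The multi-factor rule above – at least two *distinct* factor types, so two passwords don't count – is easy to encode. A small sketch (the factor names and mapping are my own illustration, not from the standard):

```python
# Map each credential to its factor type per the know/have/are taxonomy.
FACTOR_TYPES = {
    "password":    "knowledge",   # something you know
    "totp_code":   "possession",  # something you have
    "smartcard":   "possession",
    "fingerprint": "inherence",   # something you are
}

def is_multi_factor(presented: list[str]) -> bool:
    # MFA requires credentials of at least two DISTINCT factor types;
    # two credentials of the same type (e.g. two passwords) are still one factor.
    types = {FACTOR_TYPES[cred] for cred in presented}
    return len(types) >= 2

assert not is_multi_factor(["password", "password"])   # one factor type
assert is_multi_factor(["password", "smartcard"])      # two factor types
```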

Restrict physical access to cardholder data, which includes

  • Controlling facility entry.
  • Distinguishing between onsite personnel and visitors, and giving only temporary access to the latter.
  • Controlling physical access to sensitive areas even for onsite personnel.
  • Physically securing all media and storing media back-ups in a secure location, preferably off-site.
  • Maintaining strict control over the internal/external distribution, storage and accessibility of any kind of media.
  • Destroying media when it’s no longer needed.

Track and monitor all access to network resources and cardholder data by

  • Implementing audit trails – record audit trail entries for all system components for each event.
  • Securing audit trails so that they cannot be altered.
  • Performing critical log reviews at least daily.
  • Retaining audit history for at least one year.
  • Using proper time synchronisation technology.
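One common way to make an audit trail tamper-evident – supporting the "cannot be altered" requirement above – is hash chaining: each entry stores a hash of the previous one, so editing any past entry breaks the chain. A minimal sketch (real systems would also sign the chain and ship it to write-once storage):

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def _digest(ts, event, prev):
    payload = json.dumps({"ts": ts, "event": event, "prev": prev},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_entry(trail, event):
    """Append an event; each entry carries a hash linking it to its predecessor."""
    prev = trail[-1]["hash"] if trail else GENESIS
    ts = time.time()
    trail.append({"ts": ts, "event": event, "prev": prev,
                  "hash": _digest(ts, event, prev)})

def verify(trail):
    """Recompute every link; any edited entry breaks the chain from that point on."""
    prev = GENESIS
    for entry in trail:
        if entry["prev"] != prev:
            return False
        if entry["hash"] != _digest(entry["ts"], entry["event"], entry["prev"]):
            return False
        prev = entry["hash"]
    return True

trail = []
append_entry(trail, "admin login")
append_entry(trail, "card data exported")
assert verify(trail)

trail[0]["event"] = "nothing happened"   # tampering is now detectable
assert not verify(trail)
```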

It’s worth noting that while the PCI Council manages the data security standards, each payment card brand maintains its own separate compliance enforcement programme. As such, depending on which payment methods you accept, your solution must comply with (somewhat) different requirements set by card brands such as American Express, Discover, JCB International, MasterCard, Visa, etc. In order to get certified as compliant, your solution must be assessed by a Qualified Security Assessor and scanned by an Approved Scanning Vendor – the PCI Council may help you choose one. In addition, you may also run your own internal assessment prior to the “official” one, but for that your professionals must become PCI DSS Internal Security Assessors.

Finally, the scope can be reduced with the use of network segmentation, which isolates the cardholder data environment and may also lower the price of the assessment.

In conclusion, I can see that it’s A LOT OF WORK to handle cardholder data properly. None of the individual requirements is magic, but together they require quite a big investment, both upfront and afterwards. Now I understand much better why so many organisations work with certified payment service providers instead of implementing a custom solution on their own.

Hope it was a useful read for you!

Best,

Gabor

Take aways from Craft Conf 2016

May 4, 2016

This was the third time I attended Craft Conf, a conference for software engineers in Budapest, Hungary. Back in 2014 I was surprised to find that I don’t have to go abroad to attend a conference with high-quality presentations. This year was no different.

The venue, again, was peculiar: the conference was held in a railway museum. A picture tells more than a thousand words.

Yes, there it is. There were simultaneous talks on multiple stages for 2 days – it was very dense; my head was so full by the end of the conference that it took the whole 2-hour drive back home before I could talk to anyone intelligently.

You can find the whole agenda at http://beta.craft-conf.com/schedule as well as recorded videos at http://www.ustream.tv/craft. What I would like to talk about here, though, are my take-aways. Obviously, there were more take-aways, and others took other things away, so don’t be surprised that this list is subjective. I still hope you’ll enjoy reading it. So, let’s start!

The most important thing to mention is that micro services are everywhere now. They surround us, and it seems that anyone not doing micro services must be doing it wrong – at least that’s the message, which I don’t accept blindly. For this reason, I really enjoyed the talk given by Matt Ranney on what he wished he had known before scaling Uber up to 1,000 micro services. He started his talk by saying that everyone else seems to be talking about what went well, so he’d like to cover what went wrong – and that’s exactly what people can learn the most from.

Besides this, I could also talk to Adrian Cockcroft about why microservices now seem to come even from the tap. He told me that there’s still a buzz about them in Europe; in the US, however, and in Silicon Valley in particular, he wouldn’t give the same talk anymore, as the people he would be talking to are already doing it. He mentioned that microservices are not going to be a hot topic for long, similarly to SOA and cloud, for example. As for what he saw as the next big things, he mentioned serverless architecture, teraservices and persistent memory. By the way, he also talks about this topic in a podcast available on InfoQ.

Then, the following three talks fell into the same bucket, for somewhat weird reasons:

The presenters all talked about software engineering, and the presentations were for software engineers, about software engineers. Let me elaborate! Charity explained what she believes effective operations require, and after describing (“admiring”) the abstraction skills most developers have, she literally gave them the middle finger, pointing out that developers should come down from the cloud of abstraction to the ground of reality and own their code even in production. Good point. Jeff also made the point that UX designers and developers should talk more, so that the overall result is what they were both thinking of. Finally, I loved Marty’s presentation about Product Development (with capitals). His talk from last year was also very inspiring to me. He highlighted the keys to success and how the “upper levels” of product management (C-level, product managers, etc.) should collaborate with people at the “lower levels” (software engineers, QA, designers, etc.). The essence of these talks, for me, was that these teams should form a cohesive unit moving in the same direction via rapid iterations that feed back into upcoming iterations. Very, very useful!

Another enjoyable moment for me was that I could talk to Martin Fowler as well! I respect his work so much. Nevertheless, his talk about Architecture without architects wasn’t so convincing to me. It’s not because I am an architect and he kind of questioned the necessity of my role – I’m not too concerned about that, because I DO see the necessity of this role. And because I see it day by day, I thought I would catch up with him on this topic after his presentation. And so I did. My argument was as follows: while you can claim that developers can do basically all the work an architect does today, that is too simplistic a statement, for the following reasons.

  • Being an architect requires soft skills; being a very good coder is not enough. You often mediate between technical and non-technical people, and you simply must have the skills for that.
  • You capture non-functional requirements, simply because no-one else does. When you’re a coder, you code. If you’re a business analyst, you mostly talk about business – besides, I often find that BAs don’t feel confident when talking about non-functional requirements. As such, there must be someone who both understands the technical aspects of a product and is able to make the connection between the engineering team and business stakeholders. In my experience, this is the architect.
  • You document upfront. I understand that a significant part of system documentation should be a living thing; for example, infrastructure could be described via Infrastructure as Code scripts. I would argue that a diagram is more useful than a script, and I’m not sure there are tools available that can generate infrastructure diagrams out of e.g. Ansible or Chef scripts. Anyway, such tools may come to the market quickly, so that’s not even my strongest argument. It’s that this is a reactive approach to documentation: first you code, then you document. In lots of cases, though, you must follow a proactive approach, i.e. document upfront.

All in all, once you take such things into consideration, you realise that even though you may call this person a developer, in fact what she does is anything but coding. And then … why not call her an architect instead? Is it odd to say that I was a little disappointed to hear from Martin that I was right? He basically said he had been thinking of examples such as architects giving instructions to developers without having prior experience in a given technology stack. I do agree with this, as I’m a strong believer in an unbroken feedback loop between architects and developers – mind you, in both directions.

Finally, Antonio Monteiro’s presentation was an eye-opener to me. He talked about Om Next, and the reason I sat in for the presentation is that I wanted to hear more about consumer-driven API design. Being more experienced in back-end development (than front-end), I had to realise that RESTful API design is simply not suitable in a number of cases. One thing I’d already heard about previously: when doing microservices, HTTP and JSON are simply not efficient enough. One alternative, for example, is Google’s Protocol Buffers (in place of JSON) and gRPC (in place of plain HTTP). The use case Antonio was referring to, however, is more pragmatic: as network bandwidth is not improving at the same pace as CPU, memory, storage, etc., sometimes we can’t afford to place multiple calls to different REST resources just to render a page – especially on mobile devices, which must rely on often unreliable networks. Instead, we must speak to a back-end that accepts a single request for a number of resources, typically described in a declarative way. And that’s where frameworks such as Facebook’s Relay, Netflix’s Falcor and Om Next come in.
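The contrast can be sketched in a few lines of plain Python. This is a toy illustration of the idea, not the actual Relay/Falcor/Om Next API; the resource names and the resolver are hypothetical. Instead of issuing one round-trip per REST resource, the client sends a single declarative query and the back-end resolves all parts in one response:

```python
# Hypothetical back-end resources; in a classic REST design each of
# these would be a separate endpoint, i.e. a separate network round-trip.
RESOURCES = {
    "user":    lambda uid: {"id": uid, "name": "Alice"},
    "orders":  lambda uid: [{"id": 1, "total": 42.0}],
    "reviews": lambda uid: [{"product": "X", "stars": 5}],
}

def resolve(query):
    """Resolve a declarative query (a mapping of resource names to
    arguments) in a single round-trip, in the spirit of Relay/Falcor."""
    return {name: RESOURCES[name](args) for name, args in query.items()}

# The client declares everything the page needs at once:
page_data = resolve({"user": 7, "orders": 7, "reviews": 7})
```

On a flaky mobile network this turns three request/response cycles into one, which is exactly the saving these frameworks are after.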

Overall, I enjoyed this conference very much! It gave me inspiration, filled some gaps on one hand, created others on the other. Looking forward to the next one in 2017!

Best,

Gabor

Non-functional requirements – The unknown stepchild

April 13, 2016

I’ve realised that a lot of people in the IT industry don’t know much about non-functional requirements. Actually, I tend to believe that only a very few DO know something about their significance. And that’s not right – you know why? Because it makes it too easy to mismanage expectations. We’re so narrowly focused on the functionality that we think DOES matter to us that we easily forget about everything that paves the path to it.

This became much clearer to me after I attended the training Software Architect: Principles and Practices, held by the SEI. They put so much focus on quality attributes (a synonym for non-functional requirements, or NFRs) that they say: it’s the quality attributes that define the architecture, not the functional requirements. And you know what? It was an eye-opener for me.

Let’s take a simple example, a single functional requirement:

As a Content Owner I would like to publish posts on my web site so that I can share my thoughts with anyone on the Web.

Now, who & what tells you HOW to do it?

  • Compatibility tells you which framework the new feature should be incorporated into.
  • Maintainability declares that industry best practices, design patterns, prescribed development KPIs must be followed and met.
  • Supportability prescribes what needs to be met in order for the function to be easily supported in production.
  • Usability hints at how the above Post flow should work so that it pleases users the most.
  • Security advises how to treat sensitive data both in transit and at rest.
  • Performance gives you SLAs that the solution must meet.
  • Reliability defines what you can expect from the feature in exceptional cases.

I’m sure you realised that the bullet points above are all non-functional requirements – and it’s not even a full list. Without NFRs one could implement the new feature in many ways, which is not a problem per se, yet quite a few of those ways would be unacceptable to the customer, simply because their untold & uncaptured expectations wouldn’t be met. Non-functional requirements thus define the architecture, which in turn defines the team structure, cost, schedule, etc.

But, if NFRs are so influential, then they must be well-known, right? Yet, they aren’t. And that’s where the title, unknown stepchild, comes from. They’re treated as if they weren’t important (stepchild) and as a consequence they’re not captured and elaborated properly (remain unknown).

I’ve talked to many Business Analysts, who capture requirements in general, and I tend to believe that most don’t feel confident dealing with NFRs. The primary focus of these guys is the business domain, and things like 99.99% service uptime or the secure storage of personally identifiable information are too technical and, as such, too far from them. But if they don’t capture these requirements, who will? I think the answer should come from Solution Architects. People in this role should be able to talk to the business anyway, so they must be able to challenge the business from a technical perspective, present alternatives and help them choose the right one for the given purpose.

One more thing noteworthy about NFRs: they must be carefully written so that they are measurable. If you can’t measure it, how can you tell whether it’s been fulfilled, right? This is important both during development, so that you can test and accept the implementation, AND in production, so that you can continuously monitor it. Similarly to functional requirements, one must list a set of criteria that must be met:

  • Personally Identifiable Information MUST NOT be stored in the browser’s persistent storage.
  • The web page MUST load fully within 5 seconds.
  • The web service MUST fully and automatically recover within 15 seconds after the recovery of its downstream dependencies.
  • Feature X MUST be available in English and German.
  • An alert MUST be sent to e-mail address X when error Y occurs 10 times within 1 minute.

Actually, the above is not specific to non-functional requirements, yet I see it happen so often that I emphasize it here, too: the way to test a requirement must be clear, tangible and accepted by everyone.
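Measurable criteria like these can be wired straight into automated checks. A minimal sketch, using the 5-second page-load budget from the list above as the illustrative SLA (the page loader here is a stand-in; a real check would hit the deployed environment over HTTP):

```python
import time

def load_page():
    """Stand-in for fetching the page; a real check would issue an
    HTTP request against the deployed environment."""
    time.sleep(0.01)  # simulated work
    return "<html>...</html>"

def check_load_time(load, budget_seconds=5.0):
    """Measure the wall-clock time of a page load and compare it
    against the SLA budget. Returns (within_budget, elapsed)."""
    start = time.monotonic()
    load()
    elapsed = time.monotonic() - start
    return elapsed <= budget_seconds, elapsed

ok, elapsed = check_load_time(load_page)
```

The same pattern works both as an acceptance test in the pipeline and as a recurring probe in production monitoring, which is exactly the dual use argued for above.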

Do you think it makes sense? Looking forward to hearing your opinion!

Best,

Gabor

First encounter with Hybris

April 4, 2016

As a solution architect, I’ve mostly been on the technical side in the past. I had no problem with this, as I do like technology, but precisely because I’m an SA, I’ve got to see the business side of the problems, too, in order to solve business problems instead of merely technical ones. “You do this and that”, without any explanation or seeing the big picture, doesn’t work for me – I’m simply not that type of person.

When I first heard about Hybris, it was at an internal meeting. I felt I had to check out what the name meant, and what I found was more than I expected. It’s an e-commerce framework written entirely in Java. I didn’t know anything else about Hybris at that time, but even THAT was enough for me. It’s not the Java part that grabbed my attention, but the other two words: e-commerce and framework. What was exciting to me is that it promised an answer to a holistic problem, a whole business domain: e-commerce. And it’s a whole framework, which hinted that it’s more than just a tiny piece of an entire solution. Today, many forms of commerce surround us, and the presence of e-commerce is becoming so dominant that it was exciting enough for me to jump on the bandwagon.

What you should know about Hybris is that its roots are in Switzerland and Germany, and Forrester had found them to be one of the key players among the likes of IBM and Oracle – even before SAP’s acquisition in 2013. It’s been called SAP Hybris since then, and SAP’s offering is much stronger with Hybris, as one can imagine.

I’ve spent roughly a year with Hybris by now, and my experience is mixed. Fundamentally, I’m impressed by the architectural foundations of the platform. It’s not perfect, and as of today it’s still a monolith, but it brings core values to the table that differentiate it from other, more average solutions:

  • Robustness and design clarity – These guys DO follow design principles. This benefits them in the following areas:
    • It’s easy to understand their design for anyone educated in software engineering.
    • They’re using proven, time-tested design patterns, so probably their software will just work as expected.
  • Extensibility – Unsurprisingly, the OOTB Hybris platform will not solve real-life business problems without any customisation. These customisations typically fall into the following categories:
    • Data model – Extend/define product & content catalog, specify new promotions, subscription types, etc.
    • Functionality – Plug new features in the system and/or replace existing functionality, such as price & tax calculation, sourcing of products in warehouses, enable online payment, integration with an inventory management or fraud detection system, etc.
    • Configuration – Specify prices, regions, languages, user groups, etc.

The point is that Hybris offers a great deal of opportunities for these customisations and their system is quite flexible in this respect.
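The “Functionality” category above boils down to a replace-the-default pattern: the platform ships a working implementation, and a project swaps it out where the business demands. A minimal Python sketch of the idea follows – the registry and the price-calculation names are hypothetical illustrations of the pattern, not Hybris’ actual API (in Hybris this is done via Spring configuration):

```python
class ServiceRegistry:
    """A toy stand-in for a Spring-style bean context: services are
    looked up by name, and registering under an existing name
    replaces the OOTB default."""
    def __init__(self):
        self._services = {}

    def register(self, name, impl):
        self._services[name] = impl  # last registration wins

    def get(self, name):
        return self._services[name]

def default_price(product):
    """Shipped default: no project-specific pricing logic."""
    return product["base_price"]

def discounted_price(product):
    """Project override: a hypothetical flat 10% discount rule."""
    return product["base_price"] * 0.9

registry = ServiceRegistry()
registry.register("priceCalculation", default_price)     # OOTB default
registry.register("priceCalculation", discounted_price)  # project override

# Platform code that asks the registry now transparently gets the override:
price = registry.get("priceCalculation")({"base_price": 100.0})
```

The value of the pattern is that platform code never changes; only the wiring does, which is what makes such customisations upgrade-friendly.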

  • Very rich feature set – The typical four parts of the e-commerce funnel are all covered: attraction (e.g. SEO), familiarity (e.g. ratings and reviews), selling (e.g. a smooth checkout path) and retention (e.g. loyalty). In addition, there’s focus on B2C, B2B, B2B2C and B2G business models, and on such verticals as telco, financial, automotive, healthcare, etc. I mean, there are so many things one could cover here that it’s simply not even worth starting.

Briefly: I think that the above three values show a very solid foundation for a platform to be successful.

But it’s worth talking about the negative sides, too, no? Such as

  • Very long learning curve – The curve is not steep per se: as I’ve already written, anyone with a classic education in design patterns and experience using the Spring framework will be familiar with the basic concepts pretty soon. But the platform itself is complex, for the following two reasons:
    • One must learn the business domain as well, as terms like sourcing, omni-channel, up-sell, quotes, replenishment, etc. don’t come naturally to developers.
    • Hey, it’s a PLATFORM, right? Writing code is only a fraction of the work that needs to be done. Functional analysis must be done before software design even starts, to make use of as many OOTB features as possible and not reinvent the wheel. Then, when everything seems clear, comes coding, which must take into account the Hybris-specific parts of the system. And it’s also worth mentioning deployment, which has its own peculiarities.
  • Documentation – I must admit I was astonished at first by the sheer amount of documentation available to anyone. And I still admire the effort that has been put into producing these documents, including wiki pages, white papers, videos, etc. But you see, it’s never enough. You bump into issues right when you don’t expect them, and you may look everywhere – it’s guaranteed that you won’t find the answer to a lot of your questions.

All in all, it’s been a pleasant experience to work with Hybris in the past year, and I can say that I’ve learned a lot. I still have a lot of appetite to learn about the problems in the business domain (i.e. e-commerce) and how a key player in the industry has solved them. I’m going to keep an eye on Hybris’ roadmap in the future, too, and share my thoughts on some of the topics – I hope you’re interested!

Looking forward to your comments!

Best,

Gabor


EPAM and professional development services

March 30, 2016

Okay, I work for EPAM. And this is not a paid advertisement in any way whatsoever. I’m not even planning to write about my current employer many times, but this could be a good starting post for a new blog, don’t you think? 🙂 And you see, I just feel I must write about this topic, because so many people have misconceptions and misbeliefs about what companies in our profession are doing that I think it’s worth clarifying at least what EPAM does. It might eventually shed a “good” light on EPAM – I hope you won’t regret reading this post. I’ll try to be as objective as I can, I promise.

So, basically, I’ve been in the services industry throughout my whole professional career – ~18 years by now – and as such I have seen a few companies doing the same business, differently. By services industry I mean a company giving consultancy to other companies who are working on their own products. Typically a services company doesn’t have products of its own per se, but has talent instead.

I used to work for a Finnish company that had such strong ties to Nokia, the ex(?) phone maker, that it essentially couldn’t survive without it. Their business was mostly about staying in a good relationship with Nokia. Then I worked for another company, basically a start-up, which followed the same business model: we tried to sell our expertise to SMBs … well, not so successfully.

EPAM is different, though. In 2009, when I joined the company, we were mostly extending the resource pool of other companies. Those companies knew what they wanted; they just didn’t have the necessary number of people, and working with EPAM allowed them to

  • Grow faster
  • Be cheaper
  • Grow without lowering the bar, i.e. the quality of our work was satisfactory.

Let’s stick with quality for a while. It doesn’t come naturally, and it doesn’t happen overnight. Serious effort must be put into it, and it takes time – but patience pays off. Let me give you a good example: LinkedIn keeps bombarding me with offers to subscribe to Lynda, one of their recent acquisitions, for self-education. But I simply don’t need it, because there’s an equivalent at EPAM, not to mention that there are so many free resources out there, too. The point here is that EPAM puts a tremendous amount of effort into education, without which the company wouldn’t be where it stands today.

What about professional development services, you may ask. It was Forrester who first used this term in their research, titled The Forrester Wave: Software Product Development Services, Q1 2014. As they explained in their paper, it’s not only product companies that must possess new skills in today’s accelerated world, but services companies, too. And that means former outsourcing companies must be able to provide not only developers and testers, but

  1. Professionals from a number of other fields, like business analysts, solution/enterprise architects, DevOps engineers, UX designers, etc.
  2. A cohesive team built from them that is able to solve technical AND business problems alike.

Companies that cannot rise to this challenge will probably remain outsourcing companies offering body shopping, whereas the winners will be able to deliver services that Forrester calls PDS 2.0. This new type of service must assume that customers may or may not know what technical solution they need AND may or may not know what problems they are facing now, be they technical or business problems.

A successful PDS 2.0 company must be able to put itself in the customer’s shoes and work as a partner instead of just a vendor. We must be able to see upcoming risks & threats and warn the customer upfront with analysis that shows probability and impact, provides a mitigation plan, etc. We must be firm enough to say NO if a decision jeopardizes delivery in the short term or strategy in the long term. We must think end-to-end, not only focusing on delivering the solution as requested, but also keeping an eye on what might be coming and what customers ultimately need.

It’s damn hard, I must admit. Customers are not always looking for such services, i.e. they believe they know everything and don’t need advice. And in lots of cases, they’re mostly right. But who can afford not to listen to insights? We must mutually share information to be as successful as possible – ultimately, the customer’s success is our success, too, and not only from a financial perspective, but because … we all want to be satisfied and happy, no?

</BrainDump>

Looking forward to reading about your thoughts, please don’t hesitate to share!

Best to all!

Introduction

March 20, 2016

Hi,

I’m Gabor Torok, a Hungarian IT professional. What you might want to know about me is that

  • I work for EPAM Systems as a solution architect
  • I’ve been doing programming, testing and project management, development leadership and, recently, solution architecture in my career
  • I’m interested in everything IT, but have been paying attention lately to Java enterprise software development, e-commerce and product management.

I used to maintain a blog several years ago, when I was a mobile software engineer. But I stopped writing when I switched to enterprise development – I simply had no time for it, and there were enough other challenges. Fast forward to today, when I feel there’s so much to learn day by day and share with others. Maybe others are interested, and maybe they’re also willing to share their experiences. We just put our knowledge together and everyone becomes richer, right? 🙂

So, basically, that’s the point of this blog: to braindump my thoughts on various IT-related topics and start a conversation with … You! Looking forward to it!

Cheers,

Gabor