
Git cheatsheet for the CLI

Here is a cheatsheet of common git commands.


Create

  • Clone an existing repo
    $ git clone ssh://user@my.com/repo.git
  • Create a new local repo
    $ git init

Local Changes

  • Changed files in your working directory
    $ git status
  • Changes to tracked files
    $ git diff
  • Add all current changes to the next commit
    $ git add .
  • Add some changes in < file > to the next commit
    $ git add -p < file >
  • Commit all local changes in tracked files
    $ git commit -a
  • Commit previously staged changes
    $ git commit
  • Change the last commit
    $ git commit --amend
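Put together, a typical edit, stage, and commit cycle looks like the following sketch (run in a throwaway repository; the file name and identity settings are illustrative):

```shell
# A throwaway repository so nothing here touches a real project
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "first draft" > notes.txt
git status --short                   # "?? notes.txt": untracked
git add notes.txt                    # stage the new file
git commit -q -m "Add notes"

echo "second draft" >> notes.txt
git --no-pager diff                  # unstaged changes to tracked files
git commit -q -a -m "Update notes"   # stage and commit tracked changes in one step
git --no-pager log --oneline         # two commits, newest first
```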

Commit History

  • Show all commits, starting with newest
    $ git log
  • Show changes over time for a specific file
    $ git log -p < file >
  • Who changed what and when in < file >
    $ git blame < file >
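For example, the history of a single file can be traced like this (again a throwaway-repository sketch with illustrative names):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "line one" > file.txt
git add file.txt
git commit -q -m "First"
echo "line two" >> file.txt
git commit -q -a -m "Second"

git --no-pager log --oneline -- file.txt   # both commits that touched file.txt
git --no-pager blame file.txt              # each line annotated with commit and author
```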

Branches & Tags

  • List all existing branches
    $ git branch -av
  • Switch HEAD branch
    $ git checkout < branch >
  • Create a new branch based on your current HEAD
    $ git branch < new-branch >
  • Create a new tracking branch based on a remote branch
    $ git checkout --track < remote/branch >
  • Delete a local branch
    $ git branch -d < branch >
  • Mark the current commit with a tag
    $ git tag < tag-name >
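A minimal branch-and-tag workflow combining the commands above might look like this (throwaway repository; branch and tag names are illustrative):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "Initial commit"

git branch feature          # new branch at the current HEAD
git checkout -q feature     # switch HEAD to it
echo "work" > feature.txt
git add feature.txt
git commit -q -m "Feature work"
git tag v0.1                # mark the current commit with a tag
git branch -av              # '*' marks the checked-out branch
```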

Update & Publish

  • List all currently configured remotes
    $ git remote -v
  • Show information about a remote
    $ git remote show < remote >
  • Add new remote repository, named < remote >
    $ git remote add < shortname > < url >
  • Download all changes from < remote >, but don't integrate into HEAD
    $ git fetch < remote >
  • Download changes and directly merge/integrate into HEAD
    $ git pull < remote > < branch >
  • Publish local changes on a remote
    $ git push < remote > < branch >
  • Delete a branch on the remote
    $ git branch -dr < remote/branch >
  • Publish your tags
    $ git push --tags
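The difference between fetch and pull is easiest to see in practice. This sketch uses a local bare repository as the "remote" so no network access is needed (paths and identity settings are illustrative):

```shell
# A local bare repository stands in for the remote
remote=$(mktemp -d)
git init -q --bare "$remote"

work=$(mktemp -d)
git clone -q "$remote" "$work"
cd "$work"
git config user.email "demo@example.com"
git config user.name "Demo"
branch=$(git symbolic-ref --short HEAD)   # default branch name varies by git version

git commit -q --allow-empty -m "First commit"
git push -q origin "$branch"     # publish the branch on the remote
git remote -v                    # fetch and push URLs for 'origin'

git fetch origin                 # download refs without touching HEAD
git pull -q origin "$branch"     # fetch plus merge into the current branch
```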

Merge & Rebase

  • Merge < branch > into your current HEAD
    $ git merge < branch >
  • Rebase your current HEAD onto < branch >
    $ git rebase < branch >
  • Abort a rebase
    $ git rebase --abort
  • Continue a rebase after resolving conflicts
    $ git rebase --continue
  • Use your configured merge tool to solve conflicts
    $ git mergetool
  • Use your editor to manually solve conflicts and (after resolving) mark the file as resolved
    $ git add < resolved-file >
  • Mark a conflicted file as resolved by removing it
    $ git rm < resolved-file >
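A full conflict-and-resolution cycle, end to end, looks like this sketch (throwaway repository; the file contents are illustrative stand-ins for an editor session):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "base" > file.txt
git add file.txt
git commit -q -m "Base"
main=$(git symbolic-ref --short HEAD)

git checkout -q -b topic
echo "topic change" > file.txt
git commit -q -a -m "Topic edit"

git checkout -q "$main"
echo "main change" > file.txt
git commit -q -a -m "Main edit"

git merge topic || true            # both sides edited file.txt: conflict
echo "merged result" > file.txt    # resolve by hand (normally in your editor)
git add file.txt                   # mark the file as resolved
git commit -q -m "Merge topic"
```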

Undo

  • Discard all local changes in your working directory
    $ git reset --hard HEAD
  • Discard local changes in a specific file
    $ git checkout HEAD < file >
  • Revert a commit (by producing a new commit with contrary changes)
    $ git revert < commit >
  • Reset your HEAD pointer to a previous commit and discard all changes since then
    $ git reset --hard < commit >
  • Reset your HEAD pointer to a previous commit and preserve all changes as unstaged changes
    $ git reset < commit >
  • Reset your HEAD pointer to a previous commit and preserve uncommitted local changes
    $ git reset --keep < commit >
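The difference between revert (a new commit undoing an old one) and a hard reset (rewinding history) can be seen in a few lines (throwaway repository; contents are illustrative):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "v1" > file.txt
git add file.txt
git commit -q -m "First"
echo "v2" > file.txt
git commit -q -a -m "Second"

git revert --no-edit HEAD     # a third commit that undoes "Second"
cat file.txt                  # back to "v1"

git reset -q --hard HEAD~1    # drop the revert commit and its changes
cat file.txt                  # "v2" again
```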

Learn git

Docker goodness with git for builds

So, Docker continues to grow and gain adoption. Google, AWS, OpenStack, etc. are all building in Docker support. Here is a good synopsis of some of the myths about Docker and its real benefits and shortcomings.

But make no mistake: containerization is here and only going to grow. There is much discussion of how VMware is having to respond to containers, and Docker in particular, here, here and here.

What I’m now interested in for enterprise adoption is the building of interfaces on the Docker APIs that let ops leverage this goodness, allow for separation of duties, and enable clean promotion to production for the enterprise. Client libraries exist for many languages, including:

    C#
    C++
    Erlang
    Dart
    Go
    Groovy
    Haskell
    Java
    JavaScript
    Perl
    PHP
    Python
    Ruby
    Rust
    Scala
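Underneath all of those clients sits the same REST API, which the Docker daemon exposes over a unix socket. A minimal sketch, assuming a conventional socket path and a local daemon (both assumptions about the installation):

```shell
# Sketch only: the socket path and a listening daemon are assumptions
# about the local Docker installation.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
    # The API call behind `docker ps`: list running containers as JSON
    curl -s --unix-socket "$SOCK" http://localhost/containers/json
    status=ok
else
    echo "No Docker socket at $SOCK; skipping the API call"
    status=skipped
fi
```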

Real Time BI and CI in OBIEE

I was fortunate enough to meet up with Stew at IOUG 2015 in Las Vegas, where he was once again peddling his elixir of Agile into the dark underworld of OBIEE BI. I saw Stewart give a seminar at IOUG 2014 where he advocated for the native XML format of OBIEE 12c and how it was going to enable meaningful version control, at least for the OBIEE RPD. I went back to the ranch very excited by all of this, but again got the response of “that won’t work here, we don’t work that way.” Sighs.

Nonetheless, Stewart and Kevin have moved on from being ACEs to starting a new consultancy called Red Pill Analytics. They have some of their presentations and articles up on their main site, and it is worth a trawl. I will try to write a bit more about this in the next couple of days, but an important idea to highlight is their active selling of development-as-a-service. The model is that you purchase a capacity (small, medium, or large) and then fill the sprint backlog on a regular basis. It’s an agile contract and should work like any other agile structure. But you have access to some BI wizards in the model, so it should enable rapid pushout of deliverables to production.

I think this is froody both because of what it can do for capacity-starved OBIEE environments and because of what it demonstrates: the realization of capacity-based rather than contract-based external engagements with third-party organizations. The first is a must-have for organizations with a local lack of talent but high demand; it offers a feasible way forward for enterprise-class platforms like OBIEE while still retaining the ability to respond to the enhancement-request stream from functional areas.

The second is all about a new model of sourcing talent and capacity for an organization from an external service in an agile model. This is great to see, and I suspect we will see it in many other areas as time moves forward. It truly is a simple but bold extension of the IaaS/PaaS models into pure software development. It should lead to market efficiencies in hot areas as well, since competition should lend itself to growth in the sector. OBIEE, PeopleCode: all of these areas could benefit from this.

CIO Magazine misunderstands the nature of Continuous Deployment

I am still fighting the good fight to encourage the organization to move towards more automation for deployment and testing, as well as smaller chunks of change to reduce risk. But the automation continues to prove hard, and too many of our technologists seem paralyzed when considering smaller scope rather than the usual big design up front (BDUF).

CIO UK magazine published an article in which one of the counter-arguments to the goodness of CD was the increased risk it brings to the IT infrastructure. This is a clear misunderstanding of what CD does and how it does it. Dave Farley’s article offers some critique of that position. What I loved was the data he presents from Amazon:

Amazon adopted a Continuous Deployment strategy a few years ago, they currently release into production once every 11.6 seconds. Since adopting this approach they have seen a 75% reduction in outages triggered by deployment and a 90% reduction in outage minutes.

And, of course, what this also points out is that they had the metrics to measure the impact to that degree. Better data goes hand-in-hand with a shop that is able to automate, or even begin to automate, its deployment processes. How many of the critical pundits can measure their outages with minute precision, let alone do their upgrades in less than a minute?

I know we had an example of the two paths in this debate when we had to respond to Shellshock. It took several people in our data-center environment, and much heated debate, a week to get the patching ready and rolled up through dev/test/prod for about 500 machines. Alternatively, in our AWS instance, where everything is built automatically from a template, we patched the entire environment of twenty machines in less than 24 minutes start to finish. Being generous to the teams, I can estimate that there were three FTEs on it for at least five days each (I’m aggregating some of the individuals, but I am low-balling here). That is roughly 14.4 minutes per machine, as a low estimate. The AWS run was 1.2 minutes per machine: roughly 12x faster. Why wouldn’t numbers like that make one jump?
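The back-of-the-envelope numbers work out as follows (figures from the text above; the 8-hour workday is an assumption):

```shell
# 3 FTEs x 5 days x 8 hours x 60 minutes, spread over 500 machines,
# versus 24 minutes for 20 machines in AWS
awk 'BEGIN {
    dc  = 3 * 5 * 8 * 60 / 500
    aws = 24 / 20
    printf "DC: %.1f min/machine, AWS: %.1f min/machine, speedup: %.0fx\n", dc, aws, dc / aws
}'
# prints "DC: 14.4 min/machine, AWS: 1.2 min/machine, speedup: 12x"
```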

Incremental vs Iterative development

Sat in on a talk about agile development and the problems with it at scale. I have seen this first-hand, with the usual suspects: “We can’t all do that, we are not all developers.” “We can’t organize and align our teams; it’s too hard, and we’re too big, working on too many different things.” “We can’t work in that particular way because we’re different, so we need a different development process.”

All of this is untrue, which is not to say that it isn’t hard to make it work. But that’s really more about the difficulty of accepting change in culture and that most teams are pretty unorganized. There’s also a bit of “not invented here” syndrome when you operate at scale.

Still, the talk had a link to a post where the author stressed the differences between increment and iterate. Nicely done and so true. Semantics matter, and the difference between these concepts is critical for incremental, iterative delivery of value in technology. You iterate towards an optimal solution, and you may well get parts of the present goal out in increments along the way.

PMOs not as useful in a world of agile management

The traditional [PMO](http://en.wikipedia.org/wiki/Project_management_office) is often painted as the salvation of an organization out of control, with bad projects littering the landscape. But unless it’s a good PMO, the entire exercise can lead only to additional overhead, blessing even more useless projects that should have been stopped. In reality, there are lower-cost methods than a new PMO that can save the organization from its own bad culture and lead to effective projects that return value, and to a cessation of the bad ones that don’t measure up.