Having gotten to grips with Terraform over the past few months, I've learned an awful lot from the multitude of useful posts and books published on the subject, but also from firsthand experience.
One of the design patterns in Terraform that has piqued my interest is the use of multiple providers, as shown in the aws_vpc_peering_connection_accepter example in the Terraform documentation. Though only lightly covered in the documentation, it lends itself to a good many use cases.
The most popular is the ability for a provider to assume a different role (even across different accounts) to set up and manage resources. I've found this incredibly useful as a design pattern, as it also allows for finer-grained permissions management of those resources. With the ability to call external programs to supply credentials to the AWS provider, using the credential_process parameter, it becomes plausible to ensure that all resource allocation in AWS is managed solely by your master Terraform configuration.
How do you use multiple providers in Terraform?
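A second provider block with an alias can assume a role in another account, and individual resources can then opt into it. The following is a minimal sketch along the lines of the cross-account VPC peering example from the Terraform documentation – the account ID, role name, region, and CIDR blocks here are all placeholders, not values from a real setup:

```hcl
# Default provider: the requester account.
provider "aws" {
  region = "eu-west-1"
}

# Aliased provider that assumes a role in the peer (accepter) account.
# The account ID and role name are placeholders.
provider "aws" {
  alias  = "peer"
  region = "eu-west-1"

  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/TerraformPeeringRole"
  }
}

# A VPC in each account: the second one is created via the aliased
# provider, so it lands in the peer account.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_vpc" "peer" {
  provider   = aws.peer
  cidr_block = "10.1.0.0/16"
}

# Requester side of the peering connection, in the default account.
resource "aws_vpc_peering_connection" "peer" {
  vpc_id        = aws_vpc.main.id
  peer_vpc_id   = aws_vpc.peer.id
  peer_owner_id = "123456789012"
  auto_accept   = false
}

# Accepter side, created in the peer account via the aliased provider.
resource "aws_vpc_peering_connection_accepter" "peer" {
  provider                  = aws.peer
  vpc_peering_connection_id = aws_vpc_peering_connection.peer.id
  auto_accept               = true
}
```

The `provider = aws.peer` argument on a resource is what switches it over to the assumed-role credentials; everything without it uses the default provider.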
For the past few months I’ve been working back on Amazon Web Services (AWS) – trying to recall all the knowledge I curated during the earlier years of my career. It’s been really interesting to see quite how far AWS has developed since I first used it nearly a decade ago. Whilst I’d seen the headline stats published at the start of the ACloudGuru training course, it’s still a massive shock seeing quite how many services AWS now runs out of the box!
Having now spent two months back in the AWS world, one of the things that really struck me was my dependency on other AWS engineers. Whilst my sweet spot is designing systems at the architectural level, actually having someone who understands the lower-level intricacies of AWS is vitally important. The construction analogy is massively overused, but it really is congruent with a building architect working with an engineering team to make sure the whole thing doesn’t fall down.
One of the most frustrating things for me was that some of the stuff I’d consider ‘simple’ still has a relatively steep learning curve, and there’s a significant paradigm shift from network-based to host-based security. What this means in reality is that IAM (Identity and Access Management) should be your first port of call. Operating with the principle of least privilege is absolutely beautiful. I’d recommend that any old-school SysAdmins sign up to ACloudGuru or Udemy to get clued up on the implications.
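To make the least-privilege idea concrete, here is a small sketch of what that looks like in Terraform: a policy granting read-only access to a single S3 bucket and nothing else. The bucket and policy names are hypothetical:

```hcl
# Least-privilege sketch: read-only access to one named bucket.
# "example-reports-bucket" is a placeholder name.
data "aws_iam_policy_document" "read_reports" {
  # Allow reading objects within the bucket.
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-reports-bucket/*"]
  }

  # Allow listing the bucket itself (a separate ARN from its objects).
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::example-reports-bucket"]
  }
}

resource "aws_iam_policy" "read_reports" {
  name   = "read-reports-only"
  policy = data.aws_iam_policy_document.read_reports.json
}
```

Note the split between the bucket ARN and the `/*` object ARN – scoping each action to exactly the resource it applies to is the habit that takes getting used to after years of network-perimeter thinking.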
The next ‘hidden’ gem on AWS for me has been the EC2 Parameter Store, in conjunction with IAM roles and Lambda. I do need to write a more detailed post on my setup once I’ve validated it’s not too heavily over-engineered – but the combination of KMS keys, IAM roles, Lambda (to run a simple random password generator), and the Parameter Store does give me a warm glowing feeling inside. Setting something like this up 10 years ago was a feat of engineering and relatively fragile (it wouldn’t survive a reboot!) – I really like the cleanliness of this new approach.
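The building blocks of that combination look roughly like this in Terraform. This is only a sketch of the general shape, not my actual setup: the parameter name is a placeholder, and the value here is a dummy – in the arrangement described above it would be written by the Lambda-based password generator rather than hard-coded in configuration:

```hcl
# A customer-managed KMS key to encrypt the secret.
resource "aws_kms_key" "params" {
  description = "Key for encrypting SSM parameters"
}

# A SecureString parameter encrypted with that key. The value is a
# dummy placeholder; a Lambda password generator would overwrite it.
resource "aws_ssm_parameter" "db_password" {
  name   = "/example/db/password"
  type   = "SecureString"
  key_id = aws_kms_key.params.key_id
  value  = "replace-me"
}

# A policy document granting read access to just that one parameter –
# this is what an instance or Lambda role would be given, keeping the
# least-privilege theme.
data "aws_iam_policy_document" "read_db_password" {
  statement {
    actions   = ["ssm:GetParameter"]
    resources = [aws_ssm_parameter.db_password.arn]
  }

  # Decryption of a SecureString also requires access to the KMS key.
  statement {
    actions   = ["kms:Decrypt"]
    resources = [aws_kms_key.params.arn]
  }
}
```

Because the IAM role only ever sees one parameter and one key, rotating or revoking the secret is a matter of updating the parameter – nothing lives on the host, so it survives reboots for free.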
The one thing that does bother me is competition. I need to make some time to work with Azure to replicate the AWS environments on a competitor cloud (with all the support trimmings) – but also to investigate minimising the barrier to exit from any cloud platform to running bare metal.