It’s been a while since I answered any questions about BESA, so I thought I’d respond to one of the concerns that’s been popping up since I’ve been talking about how the delivery of Building Effective Security Architectures actually works. In particular, it seems people are a bit skeptical about the learning potential of the peer reviewed homework, so let’s talk about that just a bit.
I remember back in junior high and high school when, from time to time, the teacher would make us pass our papers to the person in front of us so that we could grade each other's assignments. Back then, there were a lot of complaints, groans, and the teenage refrain of:
“Why do we need to do this? Isn’t this your job? After all, you’re the teacher.”
And I have to admit that the first time I signed up for a program that used peer reviews among the cohort as the primary feedback mechanism—sometimes in cohorts of up to 70 people…
I was skeptical too.
I mean, I wanted to learn from the person running the course. I just wasn’t sure what value someone else looking at any of the work I did would actually have.
What if they gave me some bad advice?
So let me explain a little bit more about how and why this approach actually does you a lot more good than simply submitting your assignments to me for my direct feedback.
First off, the point of the exercises is that there aren’t any right answers. Sure, some answers will be better than others, and I know how I have completed all these exercises (we’ll come back to that in just a minute).
But it's actually pretty hard for "bad advice" to skew your results—unless the cohort is very small and everyone in it is really lost. Each assignment is reviewed by up to 3 of your peers (depending on the ultimate size of the cohort), and they're asked specific questions about your work so that you get perspectives and insights that differ from your own.
Sure, sometimes, people get into arguments about “being right,” but we have some mechanisms in place to make sure those get addressed.
The main thing is that you get a much broader perspective than just from me. In fact, the more diverse the cohort is, the better. When we’ve done these with closed groups all from the same place, everyone kinda tends to follow the organizational line. This doesn’t really help get a diversity of opinions, and, actually, running those courses is much harder.
The second thing is that the feedback questions are designed around exploring options beyond what you see yourself. You're trying not only to understand your peers' answers and their perspectives, but also to point out areas where they might improve, or where you've thought of something that wasn't part of their answer or solution.
What I discovered in participating in this exercise was something I first learned when I ran a course on operating systems at a local, for-profit school in Kansas City a long time ago:
If you really want to learn something, then try to teach it.
When you're trying to explain something to someone else—effectively helping them with their homework—you need to validate your own understanding of the material too. Doing this is a lot more powerful than you might think, because it's a safe environment, and there's no pressure to get it "right."
That’s what being able to “fail safe” really means.
And the last thing I want to say about how the peer reviews work is that I do actually personally review all of the assignment submissions myself. This is where I use my own answers as a guide so I’m not randomly making things up.
While I don't give direct feedback to everyone, if there are recurring themes or particular issues that I think are worth mentioning, they become part of what we talk about on the weekly Q&A call. And if I think it's truly necessary, I will reach out directly to individuals who are having trouble or seem to need additional help.
The Q&A calls are also an important source of feedback on the quality and content of the peer reviews since they’re held after the peer reviews have been completed and everyone has had a chance to see and digest the comments they received.
Any issues or questions with the feedback can be raised, and particular areas where people have questions are then discussed with me, as well as with an open floor for the entire cohort in attendance. If you can’t make it, the calls are recorded, and any followups or things that might’ve been missed are then addressed asynchronously via the dedicated Slack workspace.
I have no idea if this all makes any difference to you or not. However, I hope that at least this addresses any concerns or questions you might have about how the cohort peer reviews work and what level of direct, live input you can expect as part of the cohort.
If it hasn’t, then please feel free to reply to this email with any other questions or concerns you might have. I will always do my best to reply, and, don’t forget that if you’re reading this and you’re also a subscriber to the Security Sanity print newsletter, you can ask me anything about anything I’m qualified to answer. It’s one of the perks of being a subscriber.
Or, if none of this makes any difference to you – or if learning how to easily build SABSA security architectures isn’t one of your goals – then that’s fine too.
But as far as the course goes, the clock is starting to tick. There are just a few more weeks to get into the next cohort and have live access to me for significantly less than any other offer we have.
If the above sounds like something that will help give you the skills and confidence to be a better security architect, then head over to this link and join the cohort:
Stay safe,
ast
—
Andrew S. Townley
Archistry Chief Executive