15:05
[inaudible] even though I don't work with a team or have my own team anymore.
Bring the complaints, uh, bring your pushback.
I, I really kind of look forward to it.
15:06
And the question is, where are you in your AI journey?
The options: maybe you've just launched it now, and you can't see results yet.
Maybe you're casually learning.
Maybe you've gotten to a point where you're operationalizing and you've standardized on AI in different elements of your SDLC?
Or maybe you've had a bad experience and you've just abandoned the whole thing.
So, uh, mark your choice, and I'd love to see what people say.
Hey, I was not expecting to see abandoning as an option.
That’s great.
I could see it, though I don't know about it being that common yet.
But, uh, oh, I have to submit it to see what people said.
Okay, All right, so people are kind of in the early experimentation.
Um, so they’re doing some things, and they’re looking for feedback.
So, this is really a great time to be asking this question of, like, where, where should people be seeing the return?
Um, the folks that are operationalizing, maybe they can give some insight into where we should look for it. Uh, Seth, you cut out there for, like, three seconds on my end.
Were you addressing me?
Did everybody else hear me?
I was just saying that most people are experimenting, and we've got some people that are operationalizing in the audience, but nobody hasn't dipped their toe yet.
We’re all kind of playing around.
I did see that Reginald asked.
I'd love to know why.
Reginald, do you want to just kind of expand on that?
Why we're abandoning; or was that in the context of abandoning?
Okay, um, maybe you can answer in the text later. Um, all right, so he's asking why.
Why? Excellent. All right, um, CA, now that you know the context of the audience,
do you want to say, like, kind of what you're seeing as an analyst? We can start there.
Yeah, it's very much in line with what I would have expected, both based on our survey data and my conversations.
The experimenting phase is where most of our clients are: using AI code assistants to write code, whether that be unit tests or simple bug fixes. It's much more being used as a companion rather than a true agent in an orchestration network.
And the difference is, with the assistant tools, AI is very much helping the human developer. In the later stages, where you see the highest performers (and usually this is smaller companies), it's the flip: developers are more helping agents write code than agents helping developers write code.
That's where everyone wants to get to, but that also requires the most discipline, forethought, and iteration to get to that point.
That's great. Uh, I don't know if people saw the presentation on getting value from AI, but there's this notion of, um, defend, extend, and upend.
So, um, I'll just share that slide here.
Um, looks like I'm not able to do that like I was before. Let me just share my screen and do it that way.
So, um, this model, we talk about this quite a lot.
You know, this is kind of about what people's hopes are for AI. Um, the most modest is defend, which is basically: take people's jobs as they are, and just give them better tools to make them more efficient.
Um, extend is maybe reworking whole processes, and it sounds like this is kind of what CA might be talking about: flipping it around so it's the developer helping the agent rather than the agent helping the developer.
And upend: I don't know where we would get to with this, but this could be, like, new products and offerings.
Um, it may be changing your product altogether.
Does that kind of resonate with you, CA, or other folks in there?
In one of these columns?
I could certainly say that it jibes with my kind of expectations.
Defend is definitely the area where it's like, okay... and it is a little bit funny, because when AI was becoming a thing (and it still kind of is), it was getting this sort of pushback.
It's like: if we don't give AI tooling to our developers to at least use for, like I said, writing documentation, writing unit tests, debugging stuff, uh, they're going to become miserable, knowing that AI can now do this, and they're going to look elsewhere.
So, in that case, within an SDLC, it's very much a defend category.
Yeah, I mean, I'm seeing the same thing, Cody.
But it's interesting, because I think from above, from the board, um, they're looking at that extend.
You know, it's just like: defend is actually helping, if we're saving time, if we're doing things faster.
Then, of course, you know, what is it? Staff reduction, uh, you know.
Just role shifts and role changes.
It does seem like there's a difference in mindset and opinion between those looking at defend versus the expectations for extend.
I wonder if expectations are higher than they should be, and I think this is why:
there was a prominent CEO from a prominent company that said, I am not hiring any more engineers in 2025 and onward because of AI.
We don't want to invest in it, and I'd love to hear others' opinions on that.
But that's something I'm seeing a lot of.
We said, you know, you're making engineers spend 15% less time, so you can let go of 15% of your staff, and it just didn't make sense.
I didn't want to go there, you know.
Have you seen anything like that?
So, from my standpoint, it's funny, because what you're describing would be the clearest path to ROI, and I'm going to get to the flip side in a second, but I have not talked to one organization in my seven months at Gartner that has said we have laid off developers because we're replacing them with AI.
It is becoming an augmentation tool, and few companies are actually really struggling with AI in the SDLC.
Now, I take calls about AI elsewhere, which I won't touch on.
That's another beast.
But when it comes to AI and the SDLC, the question now becomes more this:
There’s a lot of pressure put back on leadership.
A lot of pressure put back on the product team.
You know, we're waiting for you all.
And it's putting pressure on organizations to really define the value of features.
What is the dollar value of this feature if we get it out the door?
And that's not on the technical team to define.
That's on the product team and on leadership: to be more intelligent about how they go about product planning, and to be able to tie revenue or expenses to the release of a new feature. And then you can back into an ROI from the development team, to say, well, we're getting features out the door 20% faster with AI.
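The back-calculation being described here can be sketched as a quick back-of-envelope. Every figure and function name below is a hypothetical illustration, not a number from this discussion:

```python
# Toy sketch: once product assigns a dollar value per feature, a "20% faster"
# claim can be turned into an ROI figure net of tooling spend.
# All values are hypothetical.

def feature_roi(feature_values, speedup, ai_cost):
    """Dollar ROI of shipping `speedup` more feature value per period."""
    baseline_value = sum(feature_values)    # value of features normally shipped
    extra_value = baseline_value * speedup  # added value from shipping faster
    return extra_value - ai_cost            # net of what the AI tooling cost

# Three features priced by the product team, shipped 20% faster,
# against $15,000 of AI tooling spend for the period:
roi = feature_roi([50_000, 30_000, 20_000], speedup=0.20, ai_cost=15_000)
print(roi)  # 5000.0
```

The point stands either way: without the per-feature dollar values from product, the speedup number alone cannot produce an ROI.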
The wrong features, I think? Exactly, exactly.
Well, my job is to make sure that features get built as quickly and at the highest quality possible.
That's why we point to a lot of indicators, and not so much direct revenue or expenses, on the tech side.
Yeah.
Um, and what types of value are those?
Like, when you're asked, show me the ROI, what are they looking for?
Maybe I'll... So, here's... All right.
So, uh, do people just want, like, uh, a dollar amount, or do people want some other measure of value, or something else?
So, mostly, it's a dollar amount.
What is the cost of an outage?
Uh, most companies can't answer that question.
And so then I say, well, how am I supposed to give you a dollar ROI either way? Because I can say that AI will help you lower the number of bugs that get into production.
But what is that worth to you?
I can't answer that. And so it's really forcing a lot of going back to the drawing board, outside of the tech departments in companies, to really think about, well, you know, what is it worth? High performing, tech-forward, uh, organizations, where the tech is the product, can answer these questions.
One of the most famous examples, Tom, I'm sure you're aware of: back when Meta was still called Facebook, they would fail fast.
They would just launch stuff knowing that they would put out bugs, because features were worth so much money to them and bugs didn't cost them very much.
So they made a cost decision to push stuff out quicker rather than spending a lot of time on QA.
And they were like, ah, our users will figure it out, and they're not going to leave us.
They'll help us QA it, so...
15:21
You can make decisions like that if you know what a new feature is worth versus what a bug costs you, and that's kind of what organizations are figuring out right now.
Question from Joe here.
Yeah.
Yeah, so I'm seeing that as well with my government clients. I'm glad he brought that up.
I see that with my current client as well; you know, it is about dollars.
That is a constant struggle, I see, um, with government specifically.
So that's a tricky one, you know. That's a tricky one.
I, I think... all right, sorry, Seth.
Go ahead.
I was just going to say: do people have those discussions about, like, how many heads to give up with other, uh, tooling or infrastructure expenses, like CI/CD pipelines and GitHub and IDEs and all that?
I think that the difference with AI is the marketing hype that went behind it, which was always: you're going to be more productive.
It's a force multiplier; it's going to increase productivity.
And I guess my point, and it's sort of the one Joe is making as well, is that it may increase productivity.
But what does that mean from a dollars-and-cents standpoint?
Like, literally, you know: if it doesn't hit the bottom line or the top line, then don't do it.
So you get into this conundrum where, okay, number one: do we have more work that we can give to our workers, who are now more productive?
And secondly, do we just reward productivity with more work? Because that leads to...
No, I'm good, so... yeah.
Yeah, it's almost like you're doomed for that result if you're asking that question of, like, how much more productive, how many heads can I cut, right?
But that's interesting, though, because when work is done properly, you know, you get more productive and spend less effort.
You know, AI can help you be more predictive than reactive.
It can help you write better tests.
It can make sure your feature is well understood.
All those kinds of things.
So, you know, having more predictive work makes you more productive and lets you go home at five o'clock.
You know what I mean?
So there's a little bit of a balance there.
Um, I found that, you know, the teams that were able to do the behaviors that made them more predictive than reactive were happy to take on more work, generally.
You know, I mean, there's a limit.
There's a limit: when you start getting asked for, yeah, uh, more than some amount of time, then it begins to really hurt you.
Um, so, yeah, I think I agree, but it really depends on how you're using the AI to enhance your abilities.
I'm curious to know, uh, about ending legacy systems.
Are we talking about kind of migrating off of, like, COBOL systems?
Is that what we're talking about?
Or are we talking about ending, like, legacy processes, where all the tribal knowledge is maintained in one senior developer's head, who's been there for 50 years, and we can't let them go, and they can't get hit by a bus or win the lottery, because if they don't show up for work the next day, we're in a tough situation?
I'm just asking for clarification, by the way, because I've heard both, and how AI can help with both of those things.
And so let me, without kind of...
I'll just touch on both of those things, starting in reverse order.
The one thing... oh, there we go. Both, awesome.
So, going with the latter side first: I call them hero developers. They're basically developers who've been at an organization forever, and they have all the knowledge, and they're super useful.
And they're not intentionally holding the organization hostage.
I've very rarely met a developer who's like, I'm keeping this all in my brain so they can't fire me. Very rarely have I met anyone like that. But just by the sake of having been there for so long, they don't really have the time or, uh, the incentive to go back and document everything that comes to their head.
They've just become the tribal chief of the knowledge of an old system. Literally,
I talked to an organization that employs two part-time COBOL developers, because they're on an old COBOL mainframe.
And these two part-time developers are in a nursing home, and they have to keep them on board because they're the only people who know how their mainframe system works.
So, the point is:
AI actually forces an end to all that. And what I mean by that is, AI doesn't stop and ask, hey, how do we do this in our organization again?
Or, where is this document that explains our database schema?
It just makes the best assumptions it can, and we all know what happens when people make a lot of assumptions.
So we want to make sure AI has as few assumptions as possible.
And that's why you see it in these high performing teams. They generally have, like, an implementation team that builds the product, but the AI that they are using to build the product is constrained and fed by what the platform team is providing to those AI agents. And the only way that will work is if you do go through the process of getting all of that information out of the humans' heads and into a system that agents can easily retrieve it from.
And when you do, you kind of get a scenario like that old scene in The Matrix, where Neo is trying to escape from the agents, and he needs to know how to fly a helicopter, and he calls Morpheus.
He's like, I need to know how to fly a B-212 helicopter, and it's pumped right into his brain.
And then he knows how to fly the helicopter.
That's what we need to be able to do with our agents: pump them the information as soon as they need it, so they don't start making assumptions and crash the helicopter.
So, that's kind of what I meant by ending legacy systems, in terms of processes and things like that.
On the other hand, when we're talking about how we actually lift and shift legacy systems, like an old COBOL system, or do a modernization project:
that is also somewhere AI can assist greatly, because, number one,
AI is very good at translating things, whether that's English to French or COBOL to Java. It's very good at translating things.
So, in terms of taking the, um, code and just rewriting it, it can do that very quickly.
Additionally, it can also help us extract any missing tests from an existing system.
So, the point being: if you have a giant COBOL system, and let's say you want to rewrite it in C# or Java or whatever, you basically have AI build as much automated test coverage around that old system as possible, and then make sure that those tests are all passing, and then rewrite the COBOL in Java, and then apply those same tests to the new Java code.
And, you know, I'm kind of simplifying this whole process, but then, if all your tests pass, well, your Java does what your COBOL used to do, right?
Um, and again, like I said, I'm simplifying that greatly.
But there is great expediency in that process when you're using AI to do those things that would normally take a year's worth of just typing at a keyboard.
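The rewrite loop being simplified here is essentially characterization ("golden master") testing. A minimal sketch in Python, where the legacy and rewritten functions are hypothetical stand-ins for the COBOL and Java programs:

```python
# Sketch of the characterization-test flow: record what the legacy system
# does, then require the rewrite to reproduce it exactly.
# The two interest functions are hypothetical stand-ins for COBOL/Java code.

def legacy_interest(principal_cents: int, rate_bps: int, days: int) -> int:
    """Stand-in for the behavior of the old COBOL routine."""
    return principal_cents * rate_bps * days // (10_000 * 365)

def rewritten_interest(principal_cents: int, rate_bps: int, days: int) -> int:
    """Stand-in for the new implementation (the 'Java' side)."""
    return (principal_cents * rate_bps * days) // (10_000 * 365)

def record_cases(fn, inputs):
    """Step 1: run the legacy system over many inputs and record its outputs.
    The recorded pairs describe what the system *does*, not what anyone
    believes it should do."""
    return [(args, fn(*args)) for args in inputs]

def find_mismatches(fn, cases):
    """Step 2: the rewrite passes only if every recorded output is reproduced."""
    return [(args, expected, fn(*args))
            for args, expected in cases
            if fn(*args) != expected]

inputs = [(100_000, 525, d) for d in (1, 30, 365)]
cases = record_cases(legacy_interest, inputs)
mismatches = find_mismatches(rewritten_interest, cases)
print(mismatches)  # [] -- empty means the new code does what the old code did
```

In a real migration the recorded cases would come from running the live system over production-shaped inputs, and the comparison would run in the new language's test framework; the shape of the loop is the same.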
Um, we talked about kind of value, and whether it's a positive ROI or a negative ROI. I was thinking that we might want to shift the conversation to, uh, like, where is the most value, right?
So, what elements of the SDLC give you the most leverage, or what are high performing teams doing?
What's the model of that, which we should all be striving for to get the most value, regardless of what the investment is?
Yeah, I'm curious if anyone has any thoughts on that before I launch off on my next tangent.
All, right.
Well, let me dive in first on what high performing teams are doing.
So, this also makes a big assumption that we've kind of talked about: that feature development is worth something to you.
Writing new code is worth something to you, whether that's an external-facing application that generates revenue or an internal application that increases the productivity of your other employees.
So, what these high performing teams are doing is kind of something that I touched on: they're creating a bifurcation between implementation developers and these AI platform developers who are equipping them with the tools to be more productive.
What that looks like in actuality is that implementation teams are getting much smaller.
So, because we need fewer developers per feature, we're able to almost create a team per project.
So now, instead of an entire team working on, like, uh, you know, project after project after project, there is a team small enough that it works on one project while another team is working on another project.
Above them is that platform team. That platform team is in charge of, you know, selecting which AI coding agents we're going to use, uh, creating that back pressure, maintaining that context, ensuring the guardrails are in place, and then equipping those implementation teams to create high quality features faster.
And what that's starting to look like is this analogy: the platform team puts in all the guardrails, the paved roads.
They basically pave the roads for the implementation teams to drive on, right? And so then you have the product team handing the development team specs.
15:34
If we look at this as something other than software development, we could look at this like a 3D printer, for example.
The platform teams are building the 3D printers, right?
The implementation team is taking the instructions (and the instructions are the purview of the product team: the specs for new features or whatnot), and they're putting them into the 3D printer.
They're waiting around; let's just say it's, like, a sword, right?
They're waiting around for their sword.
The sword comes out of the 3D printer.
The implementation team looks at the sword, and it's like: is this sword what these instructions asked for?
What went wrong?
We run it again.
Is this what we wanted?
Yes? Okay, great.
Let's move on to the next thing.
What is a big change for developers who are operating like this is that they have to fight the urge to pull out their little pocket knife and start whittling away at that sword, because then your specs are never going to get any better.
Your 3D printers are never getting any better. If you're doing the job of the specs and the machine for them, then you're defeating the entire purpose, and the process will never get better.
But the teams that are treating it like a 3D printer, just throwing out the code that didn't work, starting again, throwing out the code that didn't work, and constantly iterating: they're the teams that are seeing the highest throughput in terms of quality code being released to production. And it takes a while for teams to be able to get to the point I just described.
Even on my small teams where I implemented this, it took months of throwing stuff away until we got to that 3D printer model, where we can just put instructions in and get something out the door.
That's a long time. That's a long time to be able to harness the process.
Now, I will say, the industry and these vendors have started releasing tools that will assist you in building those better specs and assist you in building those better 3D printers.
But once again, that's all investment.
You know, that doesn't come for free.
And so it goes back to: what is a feature worth?
Um, but anyway, that's kind of what high performing teams are moving into.
Now, that's really interesting.
Um, one of the things from my history: there were a lot of discussions of low code environments versus AI development, right? It's kind of the same goal, but different approaches, right?
And a lot of my clients were thinking, well, AI development is better because you can eject, right? Whereas if it's low code, it stays in the low code; you can't.
If you run into some, um, constraint that you didn't expect, you're just kind of stuck with it.
You have more flexibility going forward.
You should never pull out the pocket knife, with low code or otherwise. And again, the reason is: we want the specs to be the source of truth.
We want those to always match the output, right? The minute that we take the output and change it, it no longer matches the spec, and so there's a disconnect between the spec and the code. And that confuses agents who come through later, because they're going to use the specs as documentation for what is, and then they're going to get in there, and it's not going to actually be what is.
Yeah, yeah, that makes a lot of sense.
I said to one of my colleagues that I'm starting to view LLMs as a new type of compiler, and I think one of my colleagues put it better: it's more of an intent compiler.
So, if you think of the 3D printer, that intent compiler takes the specs, which are the intent, and turns them into code.
And then you have the traditional compiler, which translates the code into machine language, kind of thing.
Yeah.
Yeah.
We've seen this evolution before, you know, when IDEs came out.
You know, I was a resister to IDEs; I was using, you know, Emacs and things like that, and now I don't use that much anymore, you know.
And, I mean, this really is just the next evolution. But I think the problem is a lot of people thought: well, now you have a computer, uh, or an AI agent, that knows Java, or whatever language, better than any human, so we don't need the programmer anymore. But that's not the value of the software engineer.
It is the way they think and how they continuously learn and everything else.
So I think it is just another tool.
There are a couple of things; I did want to touch back, kind of get back to a little bit of the ROI.
One of the big debates was always, you know: are you delivering value?
You know, I'm talking to you as a CTO.
Are you delivering value? Prove to me you're delivering value. And I think you touched on this earlier:
well, value is something the business determines.
You know, for the products we create, what value do they have? What is your ROI, and your investment in more team members, what is it you're looking for? You can't ask technologists if they're delivering value.
You can ask technologists if they're getting more efficient and how productive they are.
And I think that keeping that differentiation is super important, because, you know, I think that's ultimately where we end up with, like, DORA and things like that.
We can show you that we're very efficient.
We can show you that we're getting more productive.
Is it valuable?
That's determined by the work that's given to us.
Is it valuable, what we're doing?
And I think that's where the lines get messed up a little bit.
And when you talk about return on investment: okay, what do you mean by that, you know?
Yeah, I'm going faster.
You know, what is that?
What does that mean to you?
And I think we can shift the conversation if you can differentiate between those two. Like, you know, is there backlog work that you need done?
Is it valuable to you to do it faster, or do you want to do it slower and spend less money on it?
That's a conversation you can have, you know. And I think that's part of this as well. Anyway, just to bring that back.
Yeah, that's a very, very good way to put it.
I can measure what you're asking me to do.
I can measure whether I'm getting better at doing what you're asking me to do.
But I can't tell you whether what you're asking me to do is providing you with value.
Exactly. And that's a business discussion.
And, you know, we used to try things.
Like, when we would define a feature, part of it was, you know, assigning a business value to it.
We had this arbitrary scale.
We tried all different kinds of things to do that. It's a difficult thing to do; there's no question about it. But, I mean, if you're doing a startup, for example (and CA, you've done it, I've done it, you know, Stephanie's done it), to get investment dollars, you've got to show what your return on that is, and they don't care how your product was developed or how productive your developers are.
They want to know whether they're going to get a return on the work you're doing, and you have to be really good at showing that somehow.
When companies get bigger and bigger, and in governments, that often gets lost, you know?
So, um, yeah, it becomes a different conversation.
I was just going to say, I have seen something that's new, that I hadn't seen even when I was in the startup world.
It's that now, agile teams are starting to bring in the business side of things, and so, when work is getting planned, it's getting effort points for how hard it is to build, but then the business is assigning value points.
Yes, yeah. So again, you don't really know until you release it whether it provided value or not, but at least you have another perspective adding sort of value points to it. And one other thing that's important on that front, that I think organizations are not doing and really should.
And that is: they're using correlation instead of causation to calculate ROI.
We spent, I don't know, whatever it is, $200,000 on GitHub Copilot licenses last quarter, and we saw, whatever it is, you know, like, a 10% increase in releases, right? Whatever that metric is. But they're not actually tying it to whether the developers were using AI to build those releases. And what I mean by that is, there's a big gap there. Okay, you spent that much money and you saw a 10% increase in releases, but were all of your developers using the AI tools, or was it just some of them who are comfortable using AI tools? Because if you're seeing a 10% increase in releases and only 10% of your licenses are being used,
that's high efficiency in terms of output, but not high efficiency in terms of usage.
So, organizations also need to get better at saying exactly how much AI usage is leading to exactly what percentage change in whatever they're measuring.
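The output-versus-usage distinction can be made concrete with a small calculation; the seat counts and dollar figures below are illustrative stand-ins, not data from the discussion:

```python
# Sketch: the same spend and the same release lift look very different once
# you ask how many licensed developers actually used the tool.
# All numbers are hypothetical.

def usage_adjusted(spend, seats_paid, seats_active, release_lift_pct):
    """Separate output efficiency from usage efficiency."""
    return {
        "utilization_pct": round(100 * seats_active / seats_paid, 1),
        "cost_per_active_seat": round(spend / seats_active, 2),
        "lift_per_active_seat_pct": round(release_lift_pct / seats_active, 3),
    }

# 200 seats bought, but only 20 developers actually adopted the tool:
low_usage = usage_adjusted(spend=200_000, seats_paid=200,
                           seats_active=20, release_lift_pct=10)
# Same spend, same 10% release lift, full adoption:
full_usage = usage_adjusted(spend=200_000, seats_paid=200,
                            seats_active=200, release_lift_pct=10)
print(low_usage["cost_per_active_seat"])   # 10000.0
print(full_usage["cost_per_active_seat"])  # 1000.0
```

A 10% lift driven by 10% of the seats is a very different causal story from the same lift spread across every seat, even though the headline ROI looks identical.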
Yeah, you know, and I wonder... the other thing is, we did this a lot to ourselves in technology, you know: show me where this investment is, exactly. But think about using AI in the legal world. I'm involved in some contract writing right now, uh, and man, the efficiency for a lawyer, for a law firm, using AI to look for case law and to kind of write things appropriately, is huge.
I mean, they're getting 60, 70% benefit, but you don't hear the same conversation that we're having, you know, around, like: well, okay, so which lawyers actually did this, and how many can we fire, you know?
15:45
It feels weird that we do this from a technical perspective, but in other areas we may not necessarily do that, you know what I mean?
I think, to your point, the reason is because law firms can say: now we can take on 20% more cases, right?
Right? That's the point.
Now we can do more work, right?
And they know that; they know their value on more cases.
Organizations don't know if there's more value in writing more code. They don't know.
Yeah, yeah, good point.
Oh, and again, in the government space, where, you know, it's a whole different animal.
That's one that's a really tough nut to crack, for sure.
Interesting. Um, there are, uh, a considerable number that are in that experimentation phase.
I was wondering if we could talk about, like, how long, uh, companies typically take to start to see value, whatever that is; to say, like, oh yeah, I'm really glad we did this.
Like, what's, um, a timeline for how long it takes for a company to fully absorb the power of the tools and start to benefit from them?
Yeah. So, number one, I will say that even the highest performing, or the highest usage, companies with AI in the SDLC are always experimenting, and that's kind of the point of that platform team.
That platform team is always looking at what's coming down the pike.
What should they add to the pipeline? Blah, blah, blah.
They're always experimenting. But I know that's not what you actually mean. In terms of piloting new AI-based tools in the SDLC, that time from implementation to "I'm happy with this" or "I'm not happy with this" has gone down considerably, uh, just in the time I've been at Gartner.
When I first started here, that was six months.
That was six months, But a lot of that was overcoming that developer resistance that I was talking about.
A lot of that was, we can’t even get our developers to start using these tools.
Now, that’s really not a problem.
So, generally speaking, this experimentation phase is shrinking, mainly because developers now want to use AI, for the most part.
I understand there is still a contingent of developers who push back against it, but there's much more willingness and less resistance to overcome, so that has gone down from six months.
You know, I can't say exactly; it differs from organization to organization.
And a lot of organizations, what they do with their pilot program or experimentation phase is roll out to small pockets of teams, or random individuals within teams, so they get a true apples-to-apples comparison.
And the companies and organizations that are taking a much more analytical approach, by definition, have a slower rollout.
So, you know, I'm thinking it's anywhere between four and eight to ten months now.
So, if you're not seeing anything within that time frame, you should maybe reconsider your approach.
Or, yeah, like, you should double down or update it.
I've got to say it straight: engineers, for the most part, and I'm not saying anything revolutionary here, are not like me.
They’re not like talkative social creatures, right?
They don’t have great communication skills.
That is one of the most important skills you need when you're dealing with AI.
And so, a lot of the time, organizations waste time trying to upskill all of their developers in the use of AI, whereas it's much more beneficial to find your AI-forward developers and equip them with as much training as they want.
And then the developers who will never be awesome at prompting or, you know, talking with AI, equip them with just the amount of education they need to not make huge mistakes with AI, right?
Trying to upskill everyone equally is a fool's errand, and it never works.
Um, and we only have a few minutes left.
So, do we want to see if anyone has any questions out there related to this one?
Anything.
In fact, jump in if you have a question; we kind of dominated a little bit here.
But, uh, any thoughts, questions, ideas?
You guys want to discuss anything in the last few minutes that we have?
I'm more than happy to bring up another one.
One of the SDLC issues, talking about low-code or, you know, legacy systems that are out there. And I don't know if this has changed at all, Seth, but I know there was promise that there would be AI agents that could go in there and look at your COBOL code and figure out what your systems did, and I've heard those were not happening.
And, um, the last time I tried this, I'm doing this with a client right now.
We did something similar to what you talked about.
We had a claims system, um, an old claims system.
They want to replace it; nobody's around anymore who knows it, it just works, you know, and when it breaks, they fix it.
So the approach we took was: hey, let's take the inputs to this claims system, let's see the outputs of this claims system.
We can use AI there, and, you know, we're using ATDD and specification by example.
We can kind of go back and build the specification by example of all these claims and run through this over and over and over again, and finally, you know, that basically built the requirements from the running system, and that's actually working pretty well.
And then from there, you can go right into, like you said, building a new system; you can actually, um, use AI to take those scenarios and build things.
Um, you know, have we progressed at all with having AI look at large code bases and say, these are the requirements that are in here?
And there’s some tweaks here.
Have we seen that?
Have you seen it?
Is that something people can look forward to? That's, yeah, kind of where I'm at.
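[Editor's note: the approach described above, replaying recorded inputs through the legacy system and capturing its outputs as executable examples, is essentially characterization (golden-master) testing. A minimal Python sketch of the idea; all names here, including `legacy_claim_system`, are hypothetical stand-ins, not the client's actual system.]

```python
# Characterization-testing sketch: treat the legacy system's observed
# behavior as the specification. All names are hypothetical.

def legacy_claim_system(claim: dict) -> dict:
    """Stand-in for the opaque legacy system; in reality you would
    invoke the running system (API call, batch job, screen scrape)."""
    payout = claim["amount"] if claim["covered"] else 0.0
    return {"claim_id": claim["id"], "payout": payout}

def record_golden_master(inputs: list) -> list:
    """Replay recorded inputs and capture outputs as the golden master."""
    return [legacy_claim_system(c) for c in inputs]

def as_specification_by_example(inputs: list, outputs: list) -> list:
    """Render each captured input/output pair as a Given/When/Then
    example that a new implementation (or AI) can be tested against."""
    return [
        f"Given claim {i['id']} for {i['amount']} (covered={i['covered']}), "
        f"when processed, then payout is {o['payout']}"
        for i, o in zip(inputs, outputs)
    ]

# Recorded production inputs (hypothetical sample data).
recorded = [
    {"id": "C-1", "amount": 100.0, "covered": True},
    {"id": "C-2", "amount": 250.0, "covered": False},
]
golden = record_golden_master(recorded)
specs = as_specification_by_example(recorded, golden)
```

A replacement system built from those examples is then run against the same golden master; any divergence is either a bug in the new system or a previously undocumented requirement of the old one.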
So, yeah, so here's the problem: everything that I have to go on is anecdotal, um, and based on vendor feedback.
The reason is, and I'll tell you exactly what I know about it, but in my position I can't empirically test that. I can empirically test Claude Code and all of that, but I don't have a COBOL system to modernize, right?
Yeah. But I will say that there are AI vendors that specialize in that, and I have heard good things about them.
Amazon Q, oh gosh, I can't remember exactly what it's called, but they have a special carve-out of Q for modernizing legacy systems.
Heard great things about that.
Um, Sourcegraph Amp is another one that's geared toward understanding very large systems and almost extracting the meaning out of them.
Swimm is another one, which I have used, not on COBOL systems, and that is literally just taking very large systems and pulling out all the documentation. It doesn't try to write code.
It's not trying to write code; it basically just tries to explain everything that's going on in this tangled mess that's been there for 50 years.
Which you could then give to another agent to rewrite, right? Swimm is that one.
And another one that just came out, that I actually do have a trial for (unfortunately, I just don't have a system large enough to try it out), is, uh, IBM Bob. And Bob is very similar to Claude.
But when I spoke to IBM, their kind of promise is that it thrives on modernization.
And that's still so new that I haven't talked to any organizations that have actually used it, but from the demos, it looked good.
That's cool.
And thanks, Seth, for sharing your observations and everything.
I'm just going to share, um, the slide here.
So, uh, Tom and I are going to be doing this; we're shooting for every month.
And, um, so if you have topics you'd like to talk about with your peers, please email us and we'll get them on the agenda.
We'll choose the format that's right for the discussion. But if you want to learn about what your peers are doing, this is probably the best way to do it, the best way to identify folks that are struggling with the same challenges you are, or that maybe have information you can learn from.
So, uh, any other parting words before we're out of time?
Excellent.
All right, Thanks everybody.
Thanks everyone.
We’ll talk soon.
Take care.