Automated software testing, the continuous process of validating software functionality and ensuring it meets required standards, is easier said than done. For Customs and Border Protection, a misguided software update could wreak havoc on its operations, potentially bringing screenings and port operations to a halt.
Ken Oppenheimer, deputy executive director of the passenger systems program directorate at Customs and Border Protection, said the agency’s software encompasses everything from the trusted traveler program and Global Entry kiosks that allow visitors to move through lines faster to more comprehensive traveler safety examination systems using biometric technology.
“We have a portfolio of roughly 90-plus different applications that we operate, support and have out in the field to support the mission, whether it’s land, sea or air entry, whether it be on the primary, the initial entry point into the U.S., or even some of the systems that we deal with on the back end,” Oppenheimer said on Federal Monthly Insights – Automated Software Testing.
This is why CBP makes every effort to mitigate the chance of failures like those experienced by multiple airlines this summer, when a faulty software update caused the cancellation or delay of thousands of flights worldwide.
Customs and Border Protection runs a primarily Java-based environment and uses an open source tool for test automation. Oppenheimer said it’s not a perfect solution; some applications, including operations like biometric fingerprinting, are more difficult to automate. Still, he and his team have come a long way in the last several years.
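The article does not name the open source tool CBP uses. As a rough illustration of what browser-level test automation looks like in a Java shop, here is a minimal sketch using Selenium WebDriver, a common open source choice; the URL, element IDs and class names are hypothetical stand-ins, not CBP systems.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class TravelerLookupSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // assumes a local Chrome and chromedriver
        try {
            // Hypothetical internal URL -- a stand-in for an application under test.
            driver.get("https://apps.example.gov/traveler-lookup");

            // Drive the UI the way a user would: enter a document number and search.
            WebElement field = driver.findElement(By.id("documentNumber"));
            field.sendKeys("TEST-0000");
            driver.findElement(By.id("searchButton")).click();

            // Verify that at least one result row rendered.
            boolean found = !driver.findElements(By.cssSelector(".result-row")).isEmpty();
            System.out.println(found ? "PASS: result rendered" : "FAIL: no result row");
        } finally {
            driver.quit(); // release the browser session either way
        }
    }
}
```

Scripted checks like this cover routine web workflows well; it is hardware-dependent steps, such as capturing a live fingerprint, that resist this kind of automation, which is the difficulty Oppenheimer describes.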
“Testing is something of a passion of mine. About seven years ago, I really made it a priority for our group, and it grew slowly, but I think we’ve got the rock over the top of the hill, and we’re gaining ground quickly,” Oppenheimer said on the Federal Drive with Tom Temin.
CBP writes test cases while developers are writing code, in hopes of staying current ahead of a major release. It relies on a team that includes members who fully understand the business requirements and others who have been longstanding members of the development team.
“We are not as mature as we want to be where we can write those in the same sprint. We’re typically about a sprint behind, but it does afford us the ability to do a full regression suite before that release,” he said.
Oppenheimer added that this cadence lets CBP know which parts of its testing and upgrades need to be done manually ahead of releases. It also leaves time to run the tests several times to find defects and bugs before full implementation.
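Oppenheimer doesn’t describe the tooling behind CBP’s regression suite, but in a Java environment a release-gating regression pass is often organized with JUnit 5 tags. A minimal sketch, where FeeCalculator and its expected value are hypothetical stand-ins for real business logic whose behavior a release must not silently change:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Tagged so the whole regression suite can be selected and run as one pass.
@Tag("regression")
class FeeCalculatorRegressionTest {

    @Test
    void knownGoodFeeIsUnchanged() {
        FeeCalculator calc = new FeeCalculator(); // hypothetical class under test
        // Pinning a known-good output catches defects introduced in later sprints.
        assertEquals(6.52, calc.feeFor("I-94"), 0.001);
    }
}
```

With Maven Surefire, a command like `mvn test -Dgroups=regression` runs only the tagged tests, which is how a full regression pass can be repeated as often as needed before a release.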
The agency has tried to keep control of its upgrades even as it works with whole-suite software suppliers like Microsoft. It maintains ownership of its applications, and when a contractor is ready to issue a patch, the agency may have a window of two weeks to test it and make sure it poses no risks to the system.
But even as CBP maintains this level of control over the process, it is completing a migration to the cloud. It chose a cloud services provider in part for the benefit of improved uptime.
“I think what we get is redundancy as an easier solution and failover capability ... you’re going to measure SLAs in a variety of different ways, where you’re going to look for uptime, downtime, response time, volume, a cadre of different metrics you want to follow, all to be forward leaning on understanding the performance of your applications,” Oppenheimer said.
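The article doesn’t describe CBP’s monitoring stack, but the response-time and volume metrics Oppenheimer lists are the kind of thing commonly captured in Java applications with an open source library such as Micrometer. A minimal sketch under that assumption; the metric name and values are illustrative:

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import java.time.Duration;
import java.util.concurrent.TimeUnit;

public class SlaMetricsSketch {
    public static void main(String[] args) {
        // In production this registry would be backed by a monitoring system;
        // SimpleMeterRegistry keeps the sketch self-contained.
        MeterRegistry registry = new SimpleMeterRegistry();

        // Track response time, publishing percentiles relevant to an SLA target.
        Timer responseTime = Timer.builder("app.response.time")
                .publishPercentiles(0.5, 0.95, 0.99)
                .register(registry);

        // Record a simulated request; a real app would wrap its handler instead.
        responseTime.record(Duration.ofMillis(42));

        // Request volume falls out of the same timer: the count of recordings.
        System.out.printf("requests=%d, mean=%.1f ms%n",
                responseTime.count(),
                responseTime.mean(TimeUnit.MILLISECONDS));
    }
}
```

Feeding timers like this into dashboards and alerts is one straightforward way to stay, in Oppenheimer’s words, forward leaning on application performance.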
Looking to the future, CBP, like others in government and in industry, is leaning toward artificial intelligence and machine learning.
“There’s probably a lot in the pipeline right now. It’s kind of hard to pick one out, but everybody’s talking AI. So we’re trying to understand how that’s going to impact us and where we are going to benefit from that going down the road,” Oppenheimer said.