Generative AI is transforming many industries where people create content. Software development is no different; AI agents are in almost every development platform. But is AI improving application development and software quality? This episode of the Tech Field Day Podcast looks at some of the issues revolving around AI and App Dev with Alastair Cooke, Guy Currier, Jack Poller, and Stephen Foskett. The ultimate objective of a software development team is to deliver an application that fulfills a business need and helps the organization be more successful. An AI that can recommend basic code snippets doesn’t move that needle far. More sophistication is needed to get value from AI in the development process. The objective should be to have AI handle the repetitive tasks and allow humans to focus on innovative tasks where generative AI is less capable. AI agents must handle building tests and reviewing code for security and correctness to enable developers to concentrate on building better applications that help organizations.
The ultimate objective of a software development team is to deliver an application that fulfils a business need and helps the organization be more successful. An AI that can recommend basic code snippets doesn’t move that needle far. More sophistication is needed to get value from AI in the development process. The objective should be to have AI handle the repetitive tasks and allow humans to focus on innovative tasks where generative AI is less capable. A vital first step is making the AI aware of the unique parts of the organization where it is used, such as the standards, existing applications, and data. A human developer becomes more effective as they learn more about the team and organization where they work, and the same is true of an AI assistant.
One of the ways AI can be used to improve software development is in data normalization: taking a diverse set of data and presenting it in a way that allows simple access to that data. An example is a data lake with social media content, email archives, and copies of past transactions from our sales application, all in one place. An AI tool can read the unstructured social media posts and emails and present them as more structured data for SQL-based querying. Handling these types of low-precision data is an ideal generative AI task; reporting on the exact data in the sales records is not somewhere we want hallucinations. Generative AI might also be great for working out my address from my vague description rather than demanding that I enter my street address and postcode precisely as they are recorded in the postal service database.
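To make that concrete, here is a minimal sketch of the normalization step, assuming a hypothetical extract_structured helper in place of whatever LLM service would actually be called; the SQLite portion uses only the Python standard library.

```python
import sqlite3

def extract_structured(text: str) -> dict:
    """Placeholder for an LLM call that pulls structured fields out of
    unstructured text. A real implementation would prompt a model to
    return JSON with a fixed schema; here we return a canned result."""
    return {"customer": "A. Example", "sentiment": "positive", "product": "widget"}

# Land the low-precision, AI-extracted fields next to the exact sales records
# so both can be queried with ordinary SQL.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE social_mentions (customer TEXT, sentiment TEXT, product TEXT)"
)

for post in ["Loving my new widget!", "The widget arrived a day early."]:
    fields = extract_structured(post)
    conn.execute(
        "INSERT INTO social_mentions VALUES (:customer, :sentiment, :product)",
        fields,
    )

for row in conn.execute("SELECT product, COUNT(*) FROM social_mentions GROUP BY product"):
    print(row)
```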
Software testing is another place where AI assistants or agents can help by taking care of routine and tedious tasks. Testing every new feature is essential to automating software development and deployment, but writing tests is much less satisfying than writing new features. An AI agent that creates the tests from a description of how the feature should work is a massive help to developers and ensures code quality through good test coverage. Similarly, AI-based code review can reduce the effort required to ensure new developers write good code and implement new features well. Reviews for style, correctness, and security are all critical for software quality. Both testing and code review are vital parts of good software development and take considerable developer effort. Reducing these tedious tasks would leave more time for developers to work on innovation and align better with business needs.
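As an illustration, the snippet below shows the sort of tests an agent might generate from a one-line feature description such as “discounts over 50% require manager approval”; the apply_discount function and its behaviour are hypothetical.

```python
import pytest

# Hypothetical application code the generated tests would target.
def apply_discount(price: float, percent: float, manager_approved: bool = False) -> float:
    if percent > 50 and not manager_approved:
        raise PermissionError("Discounts over 50% require manager approval")
    return round(price * (1 - percent / 100), 2)

# Tests of the kind an AI agent could draft from the feature description.
def test_small_discount_applies_without_approval():
    assert apply_discount(100.0, 10) == 90.0

def test_large_discount_requires_manager_approval():
    with pytest.raises(PermissionError):
        apply_discount(100.0, 60)

def test_large_discount_allowed_when_approved():
    assert apply_discount(100.0, 60, manager_approved=True) == 40.0
```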
The challenge with AI agents and assistants is that we don’t yet trust the results and still need a human to review any changes proposed by the AI. Tabnine reports that up to 50% of the changes suggested by their AI are accepted without modification. That leaves 50% of suggestions that aren’t wholly acceptable. That rate must be much higher before this AI can operate without human oversight. Ideally, the AI could identify which changes are likely to be accepted and flag a confidence rating. Over time, we might set a confidence threshold below which human review is required. Similarly, we might take a manufacturing approach to code reviews and tests, allowing the AI to operate autonomously and sample-testing the resulting code every ten or one hundred changes.
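A rough sketch of such a gating policy might look like the following; the confidence scores and routing outcomes are hypothetical stand-ins for whatever the AI tooling actually exposes.

```python
import random

CONFIDENCE_THRESHOLD = 0.9   # below this, a human must review the change
SAMPLE_RATE = 0.01           # spot-check roughly 1 in 100 auto-merged changes

def route_change(change_id: str, confidence: float) -> str:
    """Decide whether an AI-proposed change needs human review."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "human_review"      # low confidence: always reviewed
    if random.random() < SAMPLE_RATE:
        return "sampled_audit"     # high confidence, but spot-checked
    return "auto_merge"            # high confidence: merge autonomously

print(route_change("PR-123", confidence=0.72))  # -> human_review
print(route_change("PR-124", confidence=0.97))  # usually auto_merge
```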
AI is the Enabler of Network Innovation
Nov 05, 2024
Artificial Intelligence is creating the kind of paradigm shifts not seen since the cloud revolution. Everyone is changing the way their IT infrastructure operates in order to make AI work better. In this episode of the Tech Field Day Podcast, Tom Hollingsworth is joined by John Freeman, Scott Robohn, and Ron Westfall as they discuss how AI is driving innovation in the networking market. They talk about how the toolsets are changing to incorporate AI features as well as how the need to push massive amounts of data into LLMs and generative AI constructs is creating opportunities for companies to show innovation. They also talk about how Ethernet is becoming ascendant in the AI market.
Modern network operations and engineering teams have a bevy of tools to leverage, such as Python, GitHub, and cloud platforms. AI is just another one of those tools, for example using natural language conversational interfaces to glean information from a dashboard. AI is also having a broader societal impact on the way we live and work. The move toward incorporating AI into every aspect of software can’t help but sweep up networking as well.
Large amounts of data are being sent to large language model (LLM) systems for storage and processing. Much like the big data craze of years gone by, we’re pushing more and more information into systems that will operate on it to discover context and meaning. Even more pressing than before, however, is the need to deliver that data to the AI compute clusters doing the work. The idea of data gravity is upended when the AI clusters have an even stronger pull. That means the network must be optimized even more than ever before.
Ethernet is quickly becoming the preferred alternative to traditional InfiniBand. While InfiniBand retains clear advantages in some use cases, its dominance is waning as Ethernet fabrics gain ground in performance. When you add in the ease with which Ethernet can scale to hundreds of thousands of nodes, you can see why providers, especially those offering AI-as-a-Service, would prefer to install Ethernet today instead of spending money on a technology with an uncertain future.
Lastly, we discuss what happens if the AI bubble finally bursts and what may drive innovation in the market from there. This isn’t the first time that networking has faced a shift in the drivers of feature development. It wasn’t that long ago that OpenFlow and SDN were the hottest ticket around and everything was going to be running in software sooner or later. While that trend has definitely cooled, we now see the benefits of the innovation it spurred and how we can continue to create value even if the primary driver for that innovation is now a footnote.
Edge Computing is a Melting Pot of Technology
Oct 29, 2024
Edge computing is one of the areas where we see startup vendors offering innovative solutions, enabling applications to operate where the business operates rather than where the IT team sit. This episode of the Tech Field Day podcast focuses on the melting pot of edge computing and features Guy Currier, John Osmon, Ivan McPhee, and host Alastair Cooke, all of whom attended the recent Edge Field Day in September. To accommodate the unique nature of the diverse and unusual locations where businesses operate, many different technologies are brought together to form the melting pot of edge computing. Containers and AI applications are coming from the massive public cloud data centres to a range of embedded computers on factory floors, industrial sites, and farm equipment. ARM CPUs, sensors, and low-power hardware accelerators are coming from mobile phones to power applications in new locations. Enterprise organizations must still control and manage data and applications across these locations and platforms. Security must be built into the edge from the beginning; edge computing often happens in an unsecured location and often with no human oversight. This melting pot of technology and innovation makes edge computing an innovative part of IT.
The edge computing landscape sometimes feels like a cross between the public cloud and remote office/branch office (ROBO) computing, yet edge computing is neither of these things. The collection of unique drivers bringing advanced applications and platforms to ever more remote locations requires a unique collection of capabilities. Edge computing is a melting pot of existing technologies and techniques, with innovation filling the gaps to bring real business value.
The original AI meme application, Hotdog or Not, has become a farming application, Weed or Crop. An AI application runs on a computer equipped with cameras and mounted to a tractor as it drives down the rows in a field, identifying whether the plants it sees are the desired crop or an undesirable weed. The weeds get zapped with a laser, so there is no need for chemical weed killers as the tractor physically targets individual pest plants. The AI runs on a specialized computer designed to survive hostile conditions on a farm, such as dust, rain, heat, and cold. The tractor needs some of the capabilities of a mobile phone, connectivity back to a central control and management system, plus operation on a limited power supply. Is there enough power to run an NVIDIA H100 GPU on the tractor? I doubt it. This Weed vs Crop AI must run on a low-power accelerator on the tractor. Self-driving capabilities get melted into the solution; a tractor that drives itself can keep roaming the field all day. Freed from the limitations of a human driver, the tractor can move slower and may even use solar power for continuous operation.
There is an argument that the edge is the same as the cloud, a tiny cloud located where the data is generated and a response is required. This often has a foundation in attempts to solve edge problems by being cloud-first and reusing cloud-native technologies at edge locations. From the broader business perspective, cloud and edge are implementation details for gaining insight, agility, and profit. The implementation details are very different. Simply lifting methodologies and technologies from a large data centre and applying them to every restaurant in your burger chain is unlikely to end well. Containerization of applications has also been seen as a cloud technology that is easily applied to the edge. Containers are a great way to package an application for distribution, and the edge is a very distributed use case. At the edge, we often need these containers to run on small and resource-limited devices. Edge locations usually have little elasticity, which is a core feature of public cloud infrastructure. Container orchestration must be lightweight and self-contained at the edge. Management through a cloud service is good, but disconnected operation is essential.
Surprisingly, edge locations also lack the ubiquitous connectivity part of the NIST cloud definition. Individual edge sites seldom have redundant network links and usually have low-cost links with low service levels. Applications running at an edge location must be able to operate when there is no off-site network connectivity. The edge location might be a gas station operating in a snowstorm; the pumps must keep running even if the phone lines are down. This feels more like a laptop user use case, where the device may be disconnected, and IT support is usually remote. Device fleet management is essential for edge deployments. A thousand retail locations will have more than a thousand computers, so managing the fleet through policies and profiles is far better than one by one.
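One common way to survive those connectivity gaps is a store-and-forward pattern: land every event locally, then drain the buffer whenever the link returns. The sketch below is illustrative only and assumes a hypothetical send_to_cloud uplink function.

```python
import sqlite3
import time

# Local buffer survives reboots; a real device might use a more robust store.
db = sqlite3.connect("edge_buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox (ts REAL, payload TEXT)")

def send_to_cloud(payload: str) -> bool:
    """Hypothetical uplink; returns False while the site is offline."""
    return False  # pretend the snowstorm has taken the link down

def record_event(payload: str) -> None:
    # Always land the event locally first so the site keeps operating offline.
    db.execute("INSERT INTO outbox VALUES (?, ?)", (time.time(), payload))
    db.commit()

def flush_outbox() -> None:
    # Drain buffered events opportunistically whenever connectivity returns.
    rows = db.execute("SELECT rowid, payload FROM outbox ORDER BY ts").fetchall()
    for rowid, payload in rows:
        if not send_to_cloud(payload):
            break  # still offline; leave the rest buffered and try again later
        db.execute("DELETE FROM outbox WHERE rowid = ?", (rowid,))
    db.commit()

record_event("pump_4:dispensed:42.1L")
flush_outbox()
```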
Security at the edge also differs from data centre and cloud security; edge locations seldom have physical security controls. Even our staff working for minimum wage at these locations may not be trusted. The idea of zero trust gets melted into many edge computing solutions, validating every part of the device and application startup to ensure nothing has been tampered with or removed. Zero trust may extend to the device’s supply chain as it is shipped to the edge location. Many edge platform vendors pride themselves on the ability of an untrained worker to deploy the device at the edge, a long way from the safe-hands deployments we see in public cloud and enterprise data centres.
Edge computing has a unique set of challenges that demand multiple technologies combined in new ways to fulfil business requirements. This melting pot of technologies is producing new solutions and unlocking value in new use cases.
Public cloud computing is a large part of enterprise IT alongside on-premises computing. Many organizations that had a cloud-first approach are now gaining value from on-premises private clouds and seeing their changing business needs lead to changing cloud use. This episode of the Tech Field Day podcast delves into the complexity of multiple cloud providers and features Maciej Lelusz, Jack Poller, Justin Warren, and host Alastair Cooke, all attendees at Cloud Field Day. The awareness of changing business needs is causing some re-thinking of how businesses use cloud platforms, possibly moving away from cloud vendor-specific services to bare VMs. VMs are far simpler to move from one cloud to another, or between public cloud and private cloud platforms. Over time, the market will speak, and if there are too many cloud providers, we will see mergers, acquisitions, or failures of smaller specialized cloud providers. In the meantime, choosing where to put which application for the best outcome can be a challenge for businesses.
Public cloud computing is a large part of enterprise IT alongside on-premises computing. Many organizations that had a cloud-first approach are now gaining value from on-premises private clouds and seeing their changing business needs lead to changing cloud use. Whether it is a return to on-premises private clouds or moving applications between cloud providers, mobility and choice are important for accommodating changing needs.
In the early days of public cloud adoption, on-premises cloud was more of an aspiration than a reality. Over the years, private cloud has become a reality for many organisations, even if the main service delivered is a VM rather than rich application services. If VMs are the tool of mobility between public clouds, then VMs are quite sufficient for mobility to private clouds. The biggest challenge in private cloud is that VMware by Broadcom has refocussed and repriced the most common private cloud platform. The change provides an opportunity for VMware to prove its value and for competing vendors to stake their claim to a large on-premises virtualization market. Beyond the big three, four, or five public cloud providers, there are a plethora of smaller public clouds that offer their own unique value. Whether it is DigitalOcean with an easy consumption model or OVH jumping into the GPU-on-demand market for AI training, there is a public cloud platform for many different specialised use cases. Each cloud provider makes a large up-front investment in its platform, its technology, and often its real estate. The investment is only made to generate a return for the founders; if the market doesn’t adopt their services, then the provider’s lifespan is very finite. Sooner or later the market will drive towards a sustainable population of cloud providers delivering the services that help their clients.
One challenge to using multiple clouds is that there is little standardization of the services across clouds. In fact, public cloud providers aim to lock customers into their cloud by providing unique features and value. The unique value may be in providing developer productivity or in offering unique software licensing opportunities. Anywhere a business uses this unique cloud value to provide business value, the cost of leaving the specific cloud provider increases. There is an argument that using the lowest common denominator of cloud, the virtual machine or container, is a wise move to allow cloud platform choice. A database server in a VM is much easier to move between clouds than migrating from one cloud’s managed database service to a different provider. If the ability to do cloud arbitrage is important, then you need your applications to be portable and not locked to one cloud platform by its unique features and value.
Whether there are too many clouds is a matter of perspective and opinion. Time will tell whether there are too many cloud providers and whether standardization of cloud services will evolve. Right now, some companies will commit to a single cloud provider and seek to gain maximum value from that one cloud, while other companies play the field and seek to gain separate value from each cloud. We are certainly seeing discussions about private cloud as an option for many applications, and concern as the incumbent primary provider changes its approach. Will we see more clouds over time or fewer?
You Don’t Need Post-Quantum Crypto Yet
Oct 15, 2024
With the advent of quantum computers, the invalidation of modern encryption is becoming a real possibility. New standards from NIST have arrived that usher in the post-quantum era. You don’t need to implement them yet, but you do need to be familiar with them. Tom Hollingsworth is joined by Jennifer Minella, Andrew Conry-Murray, and Alastair Cooke in this episode to discuss why post-quantum algorithms are needed, why you should be readying your enterprise to use them, and how best to plan your implementation strategy.
With the advent of quantum computers, the invalidation of modern encryption is becoming a real possibility. New standards from NIST have arrived that usher in the post-quantum era. You don’t need to implement them yet, but you do need to be familiar with them. Tom Hollingsworth is joined by JJ Minella, Drew Conry-Murray, and Alastair Cooke in this episode to discuss why post-quantum algorithms are needed, why you should be readying your enterprise to use them, and how best to plan your implementation strategy.
The physics behind quantum computers may be complicated, but the consequences for RSA-based cryptography are easy to figure out. Once these computers reach a level of processing power and precision that allows them to factor large numbers quickly, the current methods of encryption key generation will be invalidated. That means that any communication using RSA-style keys will be vulnerable.
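To see why factoring is the whole ballgame, here is a toy-sized illustration: once an attacker recovers the two primes behind an RSA modulus, the private key follows from textbook arithmetic. The numbers are deliberately tiny; real keys use primes thousands of bits long.

```python
# Toy RSA: the public key is (n, e); security rests on n being hard to factor.
p, q = 61, 53              # secret primes (tiny, for illustration only)
n = p * q                  # 3233, the public modulus
e = 17                     # public exponent
phi = (p - 1) * (q - 1)    # Euler's totient of n
d = pow(e, -1, phi)        # private exponent, trivially derived once p and q are known

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key
recovered = pow(ciphertext, d, n)  # decrypt with the derived private key
assert recovered == message

# A sufficiently capable quantum computer running Shor's algorithm could factor n
# quickly at real key sizes, handing an attacker p and q -- and therefore d.
print(f"n={n}, d={d}, recovered={recovered}")
```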
Thankfully, the tech industry has known about this for years, and the push to have NIST publish new encryption standards has been underway for some time. The algorithms were finalized in mid-2024, and we’re already starting to see companies adopting them. This is encouraging because it means we will be familiar with the concepts behind the methods before the threshold is reached that forces us to use these new algorithms.
Does this mean that you need to move away from using traditional RSA methods today? No, it doesn’t. What it does mean is that you need to investigate the new NIST standards and understand when and how they can be implemented in your environment and whether or not any additional hardware will be needed to support that installation.
As discussed, the time to figure this out is now. You have a runway to get your organization up to speed on these new technologies without the pain of a rushed implementation. Quantum computers may not be ready to break encryption today, but the rate at which they are improving means it is only a matter of time before you’ll need to switch over to prevent a lot of chaos with your encrypted data and communications.
Network Automation Is More Than Just Tooling
Oct 08, 2024
The modern enterprise network automation strategy is failing. This is due in part to a collection of tools masquerading as an automation solution. In this episode, Tom Hollingsworth is joined by Scott Robohn, Bruno Wollmann, and special guest Mike Bushong of Nokia to discuss the current state of automation in the data center. They discuss how tools are often improperly incorporated as well as why organizations shouldn’t rely on just a single person or team to effect change. They also explore ideas around Nokia Event-Driven Automation (EDA), a new operations platform dedicated to solving these issues.
For most enterprises, the focus on “work reduction” in automation projects has a very short lifespan. As soon as people are satisfied they have saved themselves some time in their daily work, they have a hard time translating that into a more strategic solution. Stakeholders want automation to save time and money, not just make someone’s job easier.
Also at stake is the focus on specific tools instead of platforms. Tools can certainly make things easier, but there is very little integration between them. This means that when a new task needs to be automated or a new department wants to integrate with the system, more work is required for the same level of output. Soon, the effort that goes into maintaining the automation code is more than the original task that was supposed to be automated.
The guests in this episode outline some ideas that can help teams better take advantage of automation, such as ensuring the correct focus is on the end goal and not just the operational details of the work being done. They also discuss Nokia Event-Driven Automation (EDA), which is a new operations platform that helps reimagine how data center network operations should be maintained and executed. The paradigm shift under the hood of Nokia EDA can alleviate a lot of the issues that are present in half-hearted attempts at automation and lead to better network health and more productive operations staff.
Data Infrastructure Is A Lot More Than Storage
Oct 01, 2024
The rise of AI and the importance of data to modern businesses have driven us to recognize that data matters, not storage. This episode of the Tech Field Day podcast focuses on AI data infrastructure and features Camberley Bates, Andy Banta, David Klee, and host Stephen Foskett, all of whom will be attending our AI Data Infrastructure Field Day this week. We’ve known for decades that storage solutions must provide the right access method for applications, not just performance, capacity, and reliability. Today’s enterprise storage solutions have specialized data services and interfaces to enable AI workloads, even as capacity has been driven beyond what we’ve seen in the past. Power and cooling are another critical element, since AI systems are optimized to make the most of expensive GPUs and accelerators. AI also requires extensive preparation and organization of data as well as traceability and records of metadata for compliance and reproducibility. Another question is interfaces, with modern storage turning to object stores or even vector database interfaces rather than traditional block and file. AI is driving a profound transformation of storage and data.
Infrastructure Beyond Storage
The rise of AI has fundamentally shifted the way we think about data infrastructure. Historically, storage was the primary focus, with businesses and IT professionals concerned about performance, capacity, and reliability. However, as AI becomes more integral to modern business operations, it’s clear that data infrastructure is about much more than just storage. The focus has shifted from simply storing data to managing, accessing, and utilizing it in ways that support AI workloads and other advanced applications.
One of the key realizations is that storage, in and of itself, is not the end goal. Data is what matters. Storage is merely a means to an end, a place to put data so that it can be accessed and used effectively. This shift in perspective has been driven by the increasing complexity of AI workloads, which require not just vast amounts of data but also the ability to access and process that data in real-time or near real-time. AI systems are highly dependent on the right data being available at the right time, and this has led to a rethinking of how data infrastructure is designed and implemented.
In the past, storage systems were often designed with a one-size-fits-all approach. Whether you were running a database, a data warehouse, or a simple file system, the storage system was largely the same. But AI has changed that. AI workloads are highly specialized, and they require storage systems that are equally specialized. For example, AI systems often need to access large datasets quickly, which means that traditional storage systems that rely on spinning disks or even slower SSDs may not be sufficient. Instead, AI systems are increasingly turning to high-performance storage solutions that can deliver the necessary bandwidth and low latency.
Moreover, AI workloads often require specialized data services that go beyond simple storage. These include things like data replication, data reduction, and cybersecurity features. AI systems also need to be able to classify and organize data in ways that make it easy to access and use. This is where metadata management becomes critical. AI systems need to be able to track not just the data itself but also the context in which that data was created and used. This is especially important for compliance and reproducibility, as AI systems are often used in regulated industries where traceability is a legal requirement.
Another important aspect of AI data infrastructure is the interface between the storage system and the AI system. Traditional storage systems often relied on block or file-based interfaces, but AI systems are increasingly turning to object storage or even more specialized interfaces like vector databases. These new interfaces are better suited to the needs of AI workloads, which often involve large, unstructured datasets that need to be accessed in non-linear ways.
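The appeal of a vector interface is easiest to see in miniature: rather than fetching an object by name, the workload asks which stored items are most similar to a query embedding. The sketch below does this with plain NumPy and random stand-in embeddings; a real deployment would use a vector database or index.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are embeddings produced by a model for stored documents.
doc_ids = ["doc-a", "doc-b", "doc-c", "doc-d"]
doc_vectors = rng.normal(size=(4, 8))

def top_k(query: np.ndarray, k: int = 2) -> list[str]:
    """Return the ids of the k stored vectors most similar to the query."""
    norms = np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query)
    scores = doc_vectors @ query / norms          # cosine similarity
    best = np.argsort(scores)[::-1][:k]           # highest similarity first
    return [doc_ids[i] for i in best]

query_vector = rng.normal(size=8)                 # embedding of the user's question
print(top_k(query_vector))
```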
Power and cooling are also critical considerations in AI data infrastructure. AI systems are highly resource-intensive, particularly when it comes to GPUs and other accelerators. These systems generate a lot of heat and consume a lot of power, which means that the data infrastructure supporting them needs to be optimized for energy efficiency. This has led to a shift away from traditional spinning disks, which consume a lot of power, and towards more energy-efficient storage solutions like SSDs and even tape for long-term storage.
The rise of AI has also blurred the lines between storage and memory. With the advent of technologies like CXL (Compute Express Link), the distinction between memory and storage is becoming less clear. AI systems often need to access data so quickly that traditional storage solutions are not fast enough. In these cases, data is often stored in memory, which offers much faster access times. However, memory is also more expensive and less persistent than traditional storage, which means that data infrastructure needs to be able to balance these competing demands.
In addition to the technical challenges, AI data infrastructure also needs to address the growing need for traceability and compliance. As AI systems are increasingly used to make decisions that impact people’s lives, whether in healthcare, finance, or other industries, there is a growing need to be able to trace how those decisions were made. This requires not just storing the data that was used to train the AI system but also keeping detailed records of how that data was processed and used. This is where metadata management becomes critical, as it allows organizations to track the entire lifecycle of the data used in their AI systems.
In conclusion, AI is driving a profound transformation in the way we think about data infrastructure. Storage is no longer just about performance, capacity, and reliability. It’s about managing data in ways that support the unique needs of AI workloads. This includes everything from specialized data services and interfaces to energy-efficient storage solutions and advanced metadata management. As AI continues to evolve, so too will the data infrastructure that supports it, and organizations that can adapt to these changes will be well-positioned to take advantage of the opportunities that AI presents.
AI and Cloud Demand a New Approach to Cyber Resilience featuring Commvault
Sep 24, 2024
As companies are exposed to more and more attackers, they’re realizing that cyber resilience is increasingly important. On this episode of the Tech Field Day Podcast, presented by Commvault, Senior Director of Product and Ecosystem Strategy Michael Stempf joins Justin Warren, Karen Lopez, and Stephen Foskett to discuss the growing challenges companies face in today’s cybersecurity landscape. As more organizations transition to a cloud-first operation, they’re recognizing the heightened exposure of their data protection strategies to global compliance mandates like DORA and SOCI. Adding to this complexity is the emerging threat of AI, raising important questions about how businesses can adapt and maintain resilience in the face of these evolving risks.
In today’s rapidly evolving cybersecurity landscape, companies are increasingly recognizing the importance of cyber resilience, especially as they transition to cloud-first operations. The shift to cloud environments has exposed organizations to new risks, including compliance mandates like DORA and SOCI, which require more stringent data protection strategies. Additionally, the rise of AI introduces further complexities, as businesses must now consider how AI can both enhance and threaten their cybersecurity efforts. The conversation around cyber resilience is no longer just about preventing attacks but ensuring that organizations can recover quickly and effectively when breaches inevitably occur.
One of the key challenges in achieving cyber resilience is the lack of a clear, standardized definition of what it means to be resilient in the face of cyber threats. Unlike disaster recovery, which has well-established methodologies, cyber resilience is still a moving target. The nature of cyberattacks, which are often malicious and unpredictable, makes it difficult to apply traditional disaster recovery strategies. For example, while a natural disaster like a tornado may damage infrastructure, it doesn’t actively seek to corrupt data or systems. In contrast, a cyberattack forces organizations to question the integrity of their entire environment, from networks to cloud architectures. This uncertainty underscores the need for continuous testing and preparedness to ensure that recovery is possible after an attack.
The complexity of modern IT environments, particularly with the widespread adoption of hybrid and multi-cloud setups, further complicates the task of maintaining cyber resilience. As organizations spread their data across various cloud platforms and on-premises systems, the number of moving parts increases, making it difficult for administrators to manage and protect everything manually. Automation and orchestration tools are becoming essential to handle the scale and complexity of these environments. Solutions like Commvault’s clean room recovery, which allows for dynamic scaling in the cloud and cross-platform data restoration, are helping to simplify the recovery process and reduce the time it takes to bounce back from a cyber incident.
Compliance is another critical factor in the conversation about cyber resilience. With regulations varying across jurisdictions and industries, organizations must navigate a complex web of requirements to ensure they are protecting their data appropriately. The involvement of legal teams in discussions about data protection is becoming more common, as companies recognize the legal and financial risks associated with non-compliance. Tools that can help organizations track and manage their compliance obligations, without exposing sensitive data, are becoming increasingly valuable. Commvault’s approach, which focuses on analyzing metadata rather than customer data, allows organizations to stay compliant while minimizing the risk of data exposure.
Finally, the role of AI in cybersecurity cannot be ignored. While AI offers powerful tools for automating tasks and identifying threats, it also presents new risks, particularly when it comes to data privacy and security. Responsible AI practices, like those advocated by Commvault, emphasize the importance of using AI in a way that respects customer data and focuses on operational improvements rather than invasive data scanning. By leveraging AI to enhance breach management and compliance tracking, organizations can improve their cyber resilience without compromising the integrity of their data. As AI continues to evolve, it will be crucial for companies to adopt thoughtful, responsible approaches to integrating these technologies into their cybersecurity strategies.
Hardware innovation at the edge is driven by diverse and challenging environments found outside traditional data centers. This episode of the Tech Field Day podcast features Jack Poller, Stephen Foskett, and Alastair Cooke considering the special requirements of hardware in edge computing prior to Edge Field Day this week. Edge locations, including energy, military, retail, and more, demand robust, tamper-resistant hardware that can endure harsh conditions like extreme temperatures and vibrations. This shift is fostering new hardware designs, drawing inspiration from industries like mobile technology, to support real-time data processing and AI applications. As edge computing grows, the interplay between durable hardware and adaptive software, including containerized platforms, will be crucial for maximizing efficiency and unlocking new capabilities in these dynamic environments.
In the new world of edge computing, hardware innovation is rapidly emerging. Unlike the standardized, controlled environments of data centers, edge locations present a diverse array of challenges that necessitate unique hardware solutions. This diversity is driving a wave of innovation in server and infrastructure hardware that hasn’t been seen in traditional data centers for quite some time.
The edge is essentially defined as any location that is not a data center or cloud environment. This could range from the top of a wind turbine to a main battle tank on a battlefield, a grocery store, or even underneath the fryers at a quick-serve restaurant. Each of these locations has distinct physical and operational requirements, such as varying power supplies, cooling needs, and network connectivity. Unlike data centers, where the environment is tailored to be conducive to server longevity and performance, edge environments are often hostile, with factors like extreme temperatures, vibrations, and even potential tampering by humans.
This necessitates a shift in design paradigms. Edge hardware must be robust enough to withstand these harsh conditions. For instance, the vibrations in a main battle tank are far more severe than what typical data center hardware can endure. Additionally, edge devices must be secure against physical tampering and theft, considerations that are not as critical in the controlled environment of a data center.
Interestingly, the concept of edge computing is not entirely new. Decades ago, mini-computers were deployed in grocery stores, often encased in large, durable boxes to protect against spills and physical damage. Today, the resurgence of edge computing is driven by the explosion of data and the need for real-time processing, particularly with the advent of AI. In scenarios like oil and gas exploration, where seismic data needs to be processed immediately, edge computing offers significant efficiency gains by eliminating the need to transport vast amounts of data back to a central location.
The hardware used at the edge often borrows from other industries. For example, the form factors of edge servers are reminiscent of industrial computers and fixed wireless devices, featuring big heat sinks, die-cast chassis, and power-over-Ethernet capabilities. These designs are optimized for durability and low power consumption, essential for edge environments.
Moreover, advancements in mobile technology are influencing edge hardware. Mobile devices, with their powerful yet low-power GPUs and neural processing capabilities, are paving the way for AI applications at the edge. This convergence of technologies means that edge servers are increasingly resembling high-performance laptops, repurposed to handle the unique demands of edge computing.
On the software side, virtualization and containerization are transforming how applications are deployed at the edge. However, these technologies must be adapted to the constraints of edge environments, such as intermittent connectivity and limited computational resources. Traditional assumptions about network reliability and computational power do not hold at the edge, necessitating innovative approaches to software development and deployment.
The synergy between hardware and software is crucial for the success of edge computing. As edge locations become more general-purpose, capable of running multiple applications over their lifetime, the need for flexible, containerized platforms grows. However, managing these platforms in intermittently connected environments poses significant challenges in terms of distribution and control.
AI at the edge is a particularly hot topic. The need to process data locally to avoid the inefficiencies of transporting it to a central location is driving the development of edge AI hardware. These devices must balance power consumption, cooling, and data throughput within compact, durable form factors. The IT industry’s relentless drive to make technology smaller, more powerful, and more efficient is enabling these advancements.
The edge represents a dynamic and challenging frontier for IT innovation. The unique requirements of edge environments are driving significant advancements in hardware design, influenced by technologies from various fields. As AI and other data-intensive applications move to the edge, the synergy between innovative hardware and adaptive software will be key to unlocking new efficiencies and capabilities.
Although AI can be quite useful, it seems that the promise of generative AI has led to irrational exuberance on the topic. This episode of the Tech Field Day podcast, recorded ahead of AI Field Day, features Justin Warren, Alastair Cooke, Frederic van Haren, and Stephen Foskett considering the promises made about AI. Generative AI was so impressive that it escaped from the lab, being pushed into production before it was ready for use. We are still living with the repercussions of this decision on a daily basis, with AI assistants appearing everywhere. Many customers are already frustrated by these systems, leading to a rapid push-back against the universal use of LLM chatbots. One problem the widespread misuse of AI has solved already is the search for a driver of computer hardware and software sales, though this effect already seems to be wearing off. But once we take stock of the huge variety of tools being created, it is likely that we will have many useful new technologies to apply.
There is a dichotomy in artificial intelligence (AI) between the hype surrounding generative AI and the practical realities of its implementation. While AI has the potential to address various challenges across industries, the rush to deploy these technologies has often outpaced their readiness for real-world applications. This has led to a proliferation of AI systems that, while impressive in theory, frequently fall short in practice, resulting in frustration among users and stakeholders.
Generative AI, particularly large language models (LLMs), has captured the imagination of marketers and technologists alike. The excitement surrounding these tools has led to their rapid adoption in various sectors, from customer service to content creation. However, this enthusiasm has not been without consequences. Many organizations have integrated AI into their operations without fully understanding its limitations, leading to a backlash against systems that fail to deliver on their promises. The expectation that AI can solve all problems has proven to be overly optimistic, as many users encounter issues with accuracy, reliability, and relevance in AI-generated outputs.
The initial excitement surrounding AI technologies can be likened to previous hype cycles in the tech industry, where expectations often exceed the capabilities of the technology. The current wave of AI adoption is no different, with many organizations investing heavily in generative AI without a clear understanding of its practical applications. This has resulted in a scenario where AI is seen as a panacea for various business challenges, despite the fact that many tasks may be better suited for human intervention or simpler automation solutions.
One of the critical issues with the current AI landscape is the tendency to automate processes that may not need automation at all. This can lead to a situation where organizations become entrenched in inefficient practices, making it more challenging to identify and eliminate unnecessary tasks. The focus on deploying AI as a solution can obscure the need for organizations to critically assess their processes and determine whether they are truly adding value.
Moreover, the rapid pace of AI development raises concerns about the sustainability of these technologies. As companies race to innovate and bring new AI products to market, there is a risk that many of these solutions will not be adequately supported or maintained over time. This could lead to a situation where organizations are left with outdated or abandoned technologies, further complicating their efforts to leverage AI effectively.
Despite these challenges, there is a consensus that AI has the potential to drive significant advancements in various fields. The ability of AI to analyze vast amounts of data and identify patterns can lead to improved decision-making and efficiency in many areas. However, realizing this potential requires a more nuanced understanding of AI’s capabilities and limitations, as well as a commitment to responsible implementation.
The conversation around AI also highlights the importance of data as a critical component of successful AI applications. While the algorithms and models are essential, the quality and relevance of the data fed into these systems are equally crucial. Organizations must prioritize data governance and management to ensure that their AI initiatives yield meaningful results.
As the AI landscape continues to evolve, it is essential for stakeholders to remain vigilant and critical of the technologies they adopt. The promise of AI is significant, but it is vital to approach its implementation with a clear understanding of its limitations and the potential consequences of over-reliance on automated solutions. By fostering a culture of critical thinking and continuous improvement, organizations can better navigate the complexities of AI and harness its potential to drive meaningful change.
Ethernet is not Ready to Replace InfiniBand Yet
Sep 03, 2024
AI networking is making huge strides toward standardization but Ethernet isn’t ready to displace the leading incumbent InfiniBand yet. In this episode of the Tech Field Day Podcast, Tom Hollingsworth is joined by Scott Robohn and Ray Lucchesi to discuss the state of Ethernet today and how it is continuing to improve. The guests discuss topics such as the dominance of InfiniBand, why basic Ethernet isn’t suited to latency-sensitive workloads, and how the future will improve the technology.
InfiniBand has been the dominant technology for AI networking since NVIDIA asserted itself as the leader in the market. The reasons for this are varied. NVIDIA acquired the technology through its 2019 acquisition of Mellanox, and InfiniBand has been used extensively in high performance computing (HPC) systems for a number of years. Using it for AI, which is a very hungry application, was a natural fit. Since then, InfiniBand has continued to be the preferred solution due to its low latency and the lossless nature of its packet exchange.
Companies such as Cisco, Broadcom, and Intel have championed the use of Ethernet as an alternative to InfiniBand for GPU-to-GPU communications. They’ve even founded a consortium dedicated to standardizing Ethernet fabrics focused on AI. However, even though Ethernet is a very flexible technology, it’s not as well suited to AI networking as InfiniBand has proven to be. Lossy transmission and high overhead are only two of the major issues that plague standard Ethernet when it comes to latency-sensitive information exchange. The Ultra Ethernet Consortium was founded to provide mechanisms to make Ethernet more competitive in the AI space, but it still has a lot of work to do to standardize the technology.
The future of Ethernet is bright. InfiniBand is seemingly being put into maintenance mode as even NVIDIA has started to develop Ethernet options with Spectrum-X using BlueField-3 DPUs. Cloud providers offering AI services are also mandating the use of standard, cost-effective Ethernet over proprietary InfiniBand. AI workloads are also undergoing significant changes as the infrastructure catches up to their needs. As the technology and software continue to develop, there is no doubt that Ethernet will eventually return to being the dominant communications technology. However, that change won’t happen for a few years yet.
The current hype about building massive generative AI models with massive hardware investment is just one aspect of AI. This episode of the Tech Field Day podcast features Frederic Van Haren, Karen Lopez, Marian Newsome, and host Stephen Foskett taking a different perspective on the larger world of AI. Our last episode suggested that AI as it is currently being hyped is a fad, but the bigger world of AI is absolutely real. Large language models are maturing rapidly and even generative AI is getting better by the month, but we are rapidly seeing the reality of the use cases for this technology. All neural networks use patterns in historical data to infer results, so any AI engine could hallucinate. But traditional AI is much less susceptible to errors than the much-hyped generative AI models that are capturing the headlines today. AI is a tool that augments our knowledge and decision making, but it doesn’t replace human intelligence. There is a whole world of AI applications that are productive, responsible, and practical, and these are most certainly not a fad.
The current hype surrounding massive generative AI models and the substantial hardware investments they require is just one facet of the broader AI landscape. While the media often focuses on these large language models and the billions of dollars spent on supercomputers to support them, AI encompasses much more than this. The reality is that AI is not a fad – it is a multifaceted tool that is rapidly evolving and finding practical applications across various industries.
AI can be divided into two main phases: training and inference. The training phase involves using extensive datasets and significant computational power, often requiring numerous GPUs, to build models. This phase is typically handled by a few large organizations with the resources to manage such complexity. On the other hand, the inference phase, where these models are applied in real-world scenarios, is less resource-intensive and more accessible to consumers and enterprises. This division highlights that while the development of AI models may be complex and resource-heavy, their application can be straightforward and widely beneficial.
The demand for AI is driven by consumers and enterprises seeking to simplify and enhance their operations. This demand ensures that AI is not a passing trend but a technology with staying power. However, the term “AI” is often used as a catch-all phrase, leading to confusion about its true capabilities and applications. For instance, generative AI, which includes models like ChatGPT, is just one type of AI. These models can produce impressive and convincing outputs but are also prone to errors and “hallucinations”—generating incorrect or nonsensical information based on the data they were trained on.
Traditional AI, which has been in use for years in various industries, is generally more reliable and less prone to such errors. Applications of traditional AI include anomaly detection in manufacturing, video analysis in retail, and security. These use cases demonstrate AI’s practical and responsible applications, which are far from being a fad. For example, AI is used in agriculture to monitor crop health and improve yields, a task that does not require the massive computational resources associated with generative AI.
The perception of AI as a fad is partly due to the overhyped and sometimes half-baked applications of generative AI that capture public attention. These applications often promise more than they can deliver, leading to skepticism. However, the underlying technology of AI is robust and continues to mature, offering valuable solutions in various fields. The speed of innovation in AI is accelerating, and while this can lead to unrealistic expectations, it also means that practical applications are continually emerging.
AI is a tool that augments human knowledge and decision-making rather than replacing it. This distinction is crucial for understanding AI’s role in our lives. For instance, AI can assist in generating documentation, analyzing code, or improving search capabilities within an organization. These applications enhance productivity and efficiency without replacing the need for human oversight and expertise.
The trust factor in AI is also significant. As AI becomes more integrated into everyday technologies, it is essential to market and implement it responsibly. This includes ensuring that AI systems are transparent, reliable, and used ethically. For example, non-generative AI systems, which do not generate new content but analyze existing data, are generally more trustworthy and less prone to errors.
AI is not a fad; it is a powerful tool with a wide range of applications that are already making a significant impact. While the hype around generative AI may lead to some disillusionment, the broader field of AI continues to offer practical, responsible, and valuable solutions. As AI technology evolves, it will become even more integrated into various aspects of our lives, enhancing our capabilities and helping us solve complex problems. The key is to approach AI with a clear understanding of its strengths and limitations, ensuring that it is used to augment human intelligence and decision-making responsibly.
Although AI is certain to transform society, not to mention computing, what we know of it today is likely to change, while AI itself lasts much longer. This episode of the Tech Field Day podcast brings together Glenn Dekhayser, Alastair Cooke, Allyson Klein, and Stephen Foskett to discuss the real and changing world of AI. Looking at AI infrastructure today, we see massive clusters of GPUs being deployed in the cloud and on-premises to train ever-larger language models, but how much long-term business value do these clusters have? It seems that the true transformation promised by LLMs and GenAI will be realized once models are applied across industries with RAG or fine-tuning rather than by developing new models. Fundamentally, AI is a feature of a larger business process or application rather than a product in itself. We can certainly see that AI-based applications will be transformative, but the vast investment required to build out AI infrastructure to date might never be recouped. Ultimately there is a future for AI, but not the way we have been doing it to date.
We’re Talking About the Wrong Things When It Comes to AI
The current landscape of artificial intelligence (AI) is undergoing rapid transformation, and what we know of it today may soon be considered outdated. The conversation around AI has shifted significantly, especially with the rise of generative AI, which has captured the public’s imagination and driven massive investments in AI infrastructure. However, the sustainability and long-term business value of these investments are increasingly being questioned.
Initially, the excitement around AI was centered on its integration into various applications, promising to automate and enhance tasks such as log file analysis and predictive maintenance. AI’s ability to process large datasets quickly and identify patterns or anomalies offered clear business benefits, such as reducing unplanned downtime and improving service resolution times. This practical application of AI was seen as a valuable tool for enterprises.
However, the focus has shifted towards generative AI and the development of ever-larger language models. This shift has led to discussions about the power consumption, global trade in GPUs, and the phenomenon of AI “hallucinations”—where AI generates incorrect or nonsensical outputs. These issues pose significant challenges for enterprise IT as they attempt to integrate AI into business processes.
The current approach to AI, characterized by massive GPU clusters and high power consumption, is not seen as sustainable. The investments in AI infrastructure are substantial, with companies spending hundreds of millions of dollars to build single foundational models. This approach is not scalable and does not deliver significant business value to most organizations. The high costs and limited returns suggest that this model of AI development may not be viable in the long term.
There is a growing recognition that AI should be viewed as a feature of larger business processes or applications rather than a standalone product. The true transformation promised by AI will likely be realized when models are applied across industries with techniques such as retrieval-augmented generation (RAG) or fine-tuning existing models, rather than developing new ones from scratch. This approach can provide more immediate and practical business benefits without the need for massive infrastructure investments.
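Conceptually, retrieval-augmented generation is straightforward: retrieve the most relevant internal documents and hand them to a model alongside the question. The sketch below keeps retrieval to naive keyword overlap and uses a hypothetical generate function in place of a real model API.

```python
documents = {
    "leave-policy": "Employees accrue 1.5 days of annual leave per month worked.",
    "expense-policy": "Expenses over $500 require written approval from a manager.",
    "wifi-guide": "Guest Wi-Fi credentials rotate every Monday at 09:00.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Naive retrieval: rank documents by shared words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"

question = "How many days of annual leave do I accrue each month?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(generate(prompt))
```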
The rapid pace of AI development also means that the technology is constantly evolving. Enterprises are not yet ready for full-scale AI model training, as they often lack the necessary data preparation and infrastructure. Most enterprises are currently using existing models and focusing on RAG or fine-tuning, but even these approaches present challenges. The expectations for AI often exceed the current capabilities, leading to a mismatch between anticipated and actual outcomes.
The future of AI will likely involve more efficient and scalable solutions. Innovations such as on-device inferencing and smaller, more optimized models are already showing promise. These developments could reduce the need for large-scale GPU clusters and make AI more accessible and practical for a wider range of applications.
In conclusion, while AI is certain to transform society and computing, the current approach to AI infrastructure and development is not sustainable. The focus should shift towards integrating AI as a feature within larger business processes and finding more efficient ways to deploy AI technologies. The rapid pace of change in AI means that what we know of it today may soon be considered a fad, but the underlying potential of AI to drive business value and innovation remains strong.
AI Has A Place In Networking Operations
Aug 13, 2024
Generative AI tools and features are becoming an indispensable part of the way operations teams do their jobs. Tom Hollingsworth is joined by Keith Parsons, Kerry Kulp, and Ron Westfall for this episode discussing the rise of AI tools and how they are implemented. The guests talk about how AI tools should be used by teams to increase their capabilities. They also discuss where AI still has a lot of room to grow and how to avoid traps that could cause issues for stakeholders and champions.
A huge place that AI tools can assist practitioners is with massive data parsing and analysis. Logging can produce an enormous amount of data that must be indexed and sifted before patterns can emerge and be acted upon. With AI tools, log analysis can be done in real time thanks to algorithms acting on the data as it is written and not after it has been stored for hours or days. The capability to find anomalous patterns and act on them quickly can enhance team productivity.
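A toy sketch of the idea, assuming nothing more than the Python standard library: each time slice of log data is reduced to an error rate, and a window is flagged when that rate jumps well above the rolling baseline. A real deployment would rely on a trained model or a log analytics platform, but the shape of acting on data as it arrives is the same.

```python
# Toy streaming log analysis: flag a window whose error rate jumps well
# above the rolling baseline, as the data arrives rather than hours later.
from collections import deque
from statistics import mean, pstdev
from typing import Iterable, Iterator, Tuple

def error_rate_anomalies(windows: Iterable[Tuple[int, int]],
                         history: int = 30,
                         threshold: float = 3.0) -> Iterator[int]:
    """windows yields (total_lines, error_lines) for each time slice."""
    baseline: deque = deque(maxlen=history)
    for idx, (total, errors) in enumerate(windows):
        rate = errors / total if total else 0.0
        if len(baseline) >= 5:
            mu, sigma = mean(baseline), pstdev(baseline)
            if sigma and rate > mu + threshold * sigma:
                yield idx  # anomalous window: raise an alert here
        baseline.append(rate)

# Example: the last window has a sudden spike in errors.
print(list(error_rate_anomalies([(100, 2), (120, 3), (110, 2), (105, 3),
                                 (115, 2), (100, 40)])))
```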
One area of concern for professionals just entering the workforce is the possibility of relying too much on AI to make decisions, which removes some of the learning that comes with gaining experience. When AI fixes mistakes or handles basic tasks, departments run the risk of employing people who are not trained to recognize the small issues that signal the onset of a larger problem. This skills gap could lead to fewer opportunities to advance in the company. Therefore, it is important that while AI is fixing issues, you also learn what the AI is doing so you can recognize where it is adding value.
AI-enabled tools must also work hand-in-hand with professionals to get feedback on the quality of a solution. AI can only work when someone with expert knowledge sees the assistance or the solution and provides critical feedback to ensure that the algorithm is working properly. Careful analysis results in fewer hallucinations and more appropriate responses instead of wild suggestions sourced from dubious locations. The people in the system need to tune the results to fit their specific needs, thereby increasing the accuracy of the platform as time goes by.
Lastly, these solutions need to be tied to business outcomes. Adopting AI simply to make things more technical or to reduce headcount can lead to improperly implemented solutions and subpar performance. Organizations should analyze their preferred outcomes and set expectations early, providing appropriate measurements and milestones to ensure projects stay on time and remain focused on goals.
Network as a Service is More of a Financial Model
Aug 06, 2024
Network-as-a-Service (NaaS) is a very popular topic in the modern enterprise. It promises a way to consume networking technologies in the same way that one would purchase cloud computing by only charging users for what they need. In this episode of the Tech Field Day podcast, Jordan Martin, Micheline Murphy, and Robb Boyd join Tom Hollingsworth as they discuss the various ways that Network-as-a-Service can be expressed in an organization. They debate the merits of the operational model versus the financial aspects and how NaaS blends into the wider industry trends.
Network-as-a-Service is a way to help organizations take advantage of elastic pricing and operational simplicity. Much like the managed service providers (MSPs) of years ago, NaaS companies allow you to effectively rent the hardware from a company that will deploy and manage it for you. If that sounds more like leasing equipment, you’re not far from the truth. The panelists discuss how the shift in terminology has transformed a financial transaction into more of a status symbol in enterprise IT.
NaaS isn’t something that is being theorized though. Many companies are doing it today, and not all of them look like the total replacement model. Some are doing it in more focused areas, such as SD-WAN and SASE providers handling the back-end infrastructure and leaving the management of on-premises devices to the customer. There are also avenues for providers to only do a portion of the infrastructure, such as firewalls or wireless access points. As companies spend more time developing products and solutions the number of options available to those that want to implement NaaS will only continue to grow.
The wider industry is focused on providing flexible models that allow more customers to add technology while also reducing the need for capital expenditure (CapEx) budgeting. With more users working remotely the need for massive office upgrades is subsiding. That means more opportunities for providers to come in and offer compelling solutions at lower price points. However, companies need to understand what they’re getting into and how it could affect them in the future before they decide that going with the service model is the right decision.
Despite the hype about modern applications, the mainframe remains central to enterprise IT and is rapidly adopting new technologies. This episode of the Tech Field Day podcast features Steven Dickens, Geoffrey Decker, and Jon Hildebrand talking to Stephen Foskett about the modern mainframe prior to the SHARE conference. The modern datacenter is rapidly adopting technologies like containerization, orchestration, and artificial intelligence, and these are coming to the mainframe world as well. And the continued importance of mainframe applications, especially in finance and transportation, makes the mainframe more important than ever. There is a tremendous career opportunity in mainframes as well, with recent grads commanding high salaries and working with exciting modern technologies. Modern mainframes run Linux natively, support OpenShift and containers, and support all of the latest languages and programming models in addition to PL/I, COBOL, DB2, and of course z/OS. We’re looking forward to bringing the latest in the mainframe space from SHARE to our audience.
Despite the rapid evolution and adoption of modern applications in enterprise IT, the mainframe continues to play a pivotal role, especially in industries such as finance and transportation. The mainframe is not only enduring but also evolving by integrating new technologies like containerization, orchestration, and artificial intelligence. This integration is crucial for maintaining operational resilience, enhancing cybersecurity, and improving application development through DevOps practices.
Mainframes are the backbone of many critical systems, handling vast amounts of transactional data for credit card processing, airline operations, government departments, and tax offices. The reliability and robustness of mainframes in these high-stakes environments underscore their continued relevance. The recent widespread outages involving CrowdStrike and Microsoft highlight the importance of operational resilience, an area where mainframes excel.
The adoption of AI in mainframe environments is particularly noteworthy. AI is being infused into various tools to enhance coding and operational efficiencies. Major players like BMC, IBM, and Broadcom have made significant announcements regarding their AI initiatives, which are aimed at improving the mainframe’s capabilities. The integration of AI allows for real-time decision-making processes, such as fraud detection during credit card transactions, directly within the mainframe environment.
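A hedged sketch of what in-path fraud scoring can look like, using scikit-learn's IsolationForest. The features and sample data are invented for illustration and are not drawn from any vendor's implementation; the point is that a pre-trained anomaly detector can score a transaction during the request itself.

```python
# Hypothetical in-transaction fraud scoring sketch. Feature names and data
# are invented; a pre-trained anomaly detector scores each transaction as
# it is processed.
import numpy as np
from sklearn.ensemble import IsolationForest

# Trained offline on historical transactions: [amount, hour_of_day, merchant_risk]
history = np.array([
    [25.0, 13, 0.1], [42.5, 9, 0.2], [18.0, 20, 0.1],
    [60.0, 11, 0.3], [33.0, 15, 0.2], [27.5, 17, 0.1],
])
detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

def needs_review(amount: float, hour: int, merchant_risk: float) -> bool:
    """Return True if the transaction looks anomalous and should be flagged."""
    label = detector.predict([[amount, hour, merchant_risk]])[0]
    return label == -1  # -1 means the detector considers it an outlier

print(needs_review(5000.0, 3, 0.9))  # an unusual transaction at 3 a.m.
```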
The educational landscape around mainframes is also evolving. Institutions like Northern Illinois University (NIU) are reviving their mainframe curricula to address the growing demand for skilled mainframe developers. Courses in assembler, COBOL, and other mainframe-related subjects are being reintroduced to prepare the next generation of mainframe professionals. Despite the historical decline in mainframe-focused education, the dire need for these skills in the industry is prompting universities to reconsider their course offerings. The career prospects in the mainframe domain are promising. Recent graduates with mainframe skills, particularly in COBOL, are highly sought after by major corporations such as Citibank, Wells Fargo, and Walmart. The salaries for these positions are competitive, often approaching six figures right out of college. This demand is driven by the aging workforce of current mainframe professionals and the critical nature of mainframe applications in enterprise environments.
Technologically, modern mainframes are versatile. They can run multiple operating systems, including Linux distributions like SLES, Debian, RHEL, and Ubuntu, as well as traditional mainframe operating systems like z/OS. This versatility extends to the ability to run containerized applications using platforms like OpenShift directly on the mainframe. This reduces latency and enhances performance by bringing cloud workloads closer to the mainframe’s robust processing capabilities. The mainframe’s ability to handle modern workloads is exemplified by its support for containerized Java applications and the integration of open-source packages like Podman. The hardware accelerators built into mainframes enable the efficient execution of AI workloads, further enhancing their capabilities for modern enterprise needs.
The mainframe ecosystem is also seeing innovative solutions aimed at simplifying development and operations. For instance, companies like Pop-Up Mainframe are making it easier for developers to create and test applications on mainframes without needing extensive mainframe-specific knowledge. This aligns with the broader DevOps movement and facilitates the integration of mainframe environments into modern development workflows.
In summary, the mainframe is far from obsolete. It is a dynamic and evolving platform that continues to be central to enterprise IT. With its adoption of new technologies, robust educational programs, and promising career opportunities, the mainframe is well-positioned to remain a cornerstone of enterprise computing for years to come.
Network Engineering is a Dying Profession
Jul 23, 2024
Network Engineering isn’t the hottest profession on the block and people have expressed concerns that the profession is going to be subsumed into other disciplines in the near future. In this episode of the Tech Field Day podcast, Tom Hollingsworth joins Andy Lapteff and Remington Loose at the table to discuss the decline in network engineering roles. They also talk about changes in perceptions as well as the industry. They close out by discussing the future outlook for roles involving network engineering.
Andy leads off the podcast by saying that he wouldn’t recommend anyone get into the network engineering profession. This is followed by agreement that younger professionals entering the industry are more focused on careers that seem glamorous, such as cybersecurity or AI. Even though network engineering offers a higher pay scale and good job security, those starting out would rather do the more exciting things.
Tom jumps in to highlight that network engineering might not be growing but it is far from dying. Looking at careers like mainframe operators or COBOL programmers will show that no matter how old the technology might be there are still people that need to do the job. With the rise of cloud computing, people are training on new technologies and finding that some of the same skills they’ve needed in the past apply in the new role. That means that specialized knowledge is still critical no matter what the actual skill might be.
The real culprit is that engineering skills have been abstracted away as the focus of operations has moved toward app-centric models and dwells less on the actual infrastructure. That means that people who have trained on those skills in the past will still be valuable, even if they aren’t the lords of the datacenter they previously might have been. Given the reduced prestige and continued long hours, on-call needs, and lack of recognition in the company, it’s a wonder that people want to be network engineers at all. But to say the field is dying is not accurate.
Open Source Helps Small Businesses Modernize Applications
Jul 16, 2024
Open-source platforms and managed services are a huge help when it comes to modernizing applications, especially for smaller businesses. This episode of the Tech Field Day podcast, recorded at AppDev Field Day, includes Jack Poller, Stephen Foskett, and Paul Nashawaty discussing the challenges and solutions for small businesses in modernizing applications. Small businesses often face significant challenges when it comes to modernizing their applications, primarily due to limited resources and the complexity of cutting-edge technologies. While larger enterprises might have the capacity to adopt sophisticated technologies like microservices, AI, and advanced security systems, smaller companies struggle to keep pace. However, the availability of open-source technologies and managed services provides a viable pathway for these businesses to modernize incrementally. By leveraging open-source platforms and engaging with managed services, small businesses can modernize their applications without the need for extensive in-house expertise or substantial upfront investment. This approach allows them to progressively adopt new technologies and improve their competitive position in the market.
Open-source platforms and managed services are increasingly becoming pivotal in aiding small businesses to modernize their applications. This trend is driven by the need for these businesses to update their systems without the heavy financial and resource burdens that typically accompany such transformations. Open-source solutions provide a cost-effective and flexible alternative to proprietary software, offering a wide range of tools and libraries that businesses can adapt to their specific needs. This is particularly beneficial for small businesses that may not have the extensive IT departments or budgets of larger corporations but still need to compete in a technology-driven market.
The use of managed services further complements the advantages offered by open-source technologies. Managed services allow businesses to outsource certain IT functions, such as application management, cloud services, and cybersecurity, to specialized providers. This not only helps small businesses manage costs more effectively by reducing the need for in-house IT staff but also ensures that they have access to the latest technologies and expertise. Managed service providers can offer scalable solutions that grow with the business, ensuring that IT capabilities align with business needs without upfront investments in hardware or software.
One significant challenge for small businesses looking to modernize their applications is the complexity of new technologies. Advanced solutions like microservices architectures, artificial intelligence (AI), and sophisticated security protocols can be daunting. However, open-source communities often provide extensive documentation, user forums, and support that can help small businesses navigate these complexities. By engaging with these communities, small businesses can access a wealth of knowledge and experience, reducing the learning curve associated with new technologies.
Moreover, open-source software often encourages innovation through community collaboration. Small businesses can benefit from the continuous improvements and innovations contributed by developers worldwide. This collaborative approach not only accelerates the development process but also introduces small businesses to best practices and emerging trends in software development.
However, adopting open-source software does come with challenges, such as the need for technical expertise to customize and maintain the software. This is where managed services play a crucial role. By partnering with providers that offer tailored support and services, small businesses can leverage the benefits of open-source software without needing to develop deep technical expertise internally. Managed service providers can handle the complex aspects of software integration, security, and compliance, allowing small businesses to focus on their core operations.
In conclusion, the combination of open-source platforms and managed services provides a powerful pathway for small businesses to modernize their applications. This approach not only helps manage costs and reduce complexity but also enables small businesses to tap into advanced technologies and innovate faster. As the digital landscape continues to evolve, small businesses that leverage these tools effectively will be better positioned to compete and succeed in the modern economy.
On-Premises Networks Need to Work Like Cloud Networks
Jul 09, 2024
On-premises networks are still very common for specialized applications and need to adopt cloud network operational models. In this episode, Tom Hollingsworth is joined by experts Ron Westfall, Chris Grundemann, and Jeremy Schulman as they discuss how to better implement these preferred methods. They also debate how each model has different requirements and may face headwinds in an enterprise.
The experts discuss how hybrid cloud is an operational model that helps organizations balance regulatory requirements with operational efficiencies. Not every company is a perfect fit for the public cloud, and hybrid models give a much better experience. Hybrid cloud also allows for better transitions for applications as organizations investigate moving more of their workloads into the cloud.
Another key distinction is ensuring that the operational model focuses on a service-oriented architecture. Cloud networks offer a limited feature set for services to force users to conform to their service offerings. For networks with more technical debt it is critical to investigate the requirements of your existing network to ensure compatibility with these ideas.
One of the newest applications driving this shift in operational models is AI. With the surge of companies adopting on-premises AI hardware clusters, the question of how to use them most efficiently comes into focus. Because AI has no legacy operational model outside of the modern application stack, there is already a bias toward cloud-style operations. This means that organizations deploying AI clusters are starting to apply those ideas to their traditional networks.
In the end, you need to know what your network requirements are before you try to adopt new ways of managing it. Otherwise, you may find yourself with the worst of both worlds: changing the way you do things while making everyone unhappy.
Everything is the Cloud and The Cloud is Everything
Jul 02, 2024
The cloud operating model is everywhere these days, and just about everything is now called cloud. This episode of the Tech Field Day podcast, recorded live at Cloud Field Day 20, includes Stephen Foskett, Jeffrey Powers, Alastair Cooke, and Steve Puluka discussing the true meaning of the term cloud computing. Cloud has evolved from its initial definition by NIST in 2011. The cloud concept is ubiquitous, adopted from personal devices to industrial IoT and data centers. The cloud operating model abstracts the complexity of underlying infrastructure, allowing businesses to focus on their core differentiators. But the panelists concluded that while the cloud is everywhere, not everything is the cloud.
Everything in tech is called “cloud,” from personal devices to industrial IoT, data centers, and beyond. The evolution of cloud computing, which was formally defined by NIST in 2011, has seen the concept permeate various sectors, transforming how services are delivered and consumed. Initially, cloud services were the domain of large data centers operated by companies like AWS, Azure, and Alibaba. However, over the past decade, cloud principles have been adopted by smaller companies, the VMware community, and even personal users, making the cloud a universal operating model rather than just a location.
The core appeal of the cloud lies in its ability to abstract the complexities of underlying infrastructure, allowing businesses to focus on differentiating their services rather than managing hardware intricacies. This shift has led to a significant reduction in the need for detailed knowledge about hardware configurations, as cloud services handle these aspects seamlessly. The cloud operating model enables businesses to allocate resources more efficiently, focusing on application development and operational excellence rather than hardware maintenance.
The future of storage and computing is increasingly leaning towards a combination of cloud services and mobile devices. The younger generation, for instance, is more inclined to use mobile devices for tasks traditionally performed on laptops. This trend is supported by the cloud’s ability to provide a seamless experience across devices, ensuring that data and applications are accessible regardless of the hardware in use. This shift is evident in the rise of devices like Chromebooks, which rely heavily on cloud services for storage and application delivery.
In the enterprise realm, the cloud’s influence is equally profound. While data gravity and latency considerations still necessitate on-premises deployments for certain applications, the cloud operating model is becoming the standard. Modern applications are designed to accommodate the latency and caching mechanisms inherent in cloud environments, enabling seamless operation regardless of the physical location of the infrastructure. Legacy applications, while still present, are gradually being replaced or virtualized to fit into this new paradigm.
The edge, traditionally characterized by proprietary hardware, has also undergone a transformation. Today, edge locations utilize standard servers running virtual machines or containerized applications orchestrated by platforms like Kubernetes. This approach mirrors the cloud operating model, where local servers act as caches for cloud services, ensuring resilience and flexibility. The edge has, in many ways, become more cloud-like than traditional data centers, embracing the principles of abstraction and orchestration.
Despite these advancements, the cloud is not a one-size-fits-all solution. Certain applications, particularly those with stringent latency and data sovereignty requirements, may still necessitate on-premises deployments. However, the overarching trend is towards a cloud-centric model, where infrastructure is managed and consumed as a service, regardless of its physical location. This shift is driven by the need for agility, scalability, and cost-efficiency, which the cloud model inherently provides.
In conclusion, while not everything is the cloud, the cloud is indeed everywhere. It has become the default operating model for modern IT services, extending from personal devices to enterprise data centers and edge locations. The cloud’s principles of abstraction, orchestration, and service-based delivery have permeated all aspects of technology, making it an integral part of the digital landscape. As technology continues to evolve, the cloud will remain a central theme, shaping how services are delivered and consumed in an increasingly connected world.
GenAI is Revolutionizing the Enterprise
Jun 25, 2024
Generative AI will revolutionize enterprise IT, but not in the way people expect. This episode of the Tech Field Day podcast includes Stephen Foskett discussing the impact of GenAI with Jack Poller, Calvin Hendryx-Parker, and Josh Atwell at AppDev Field Day. The discussion centered around the potential impact of generative AI on enterprises, debating whether it will significantly transform business operations or merely offer incremental improvements. Generative AI is still in its infancy and may not yet provide revolutionary benefits, but there is great potential for AI in automating tasks and enhancing efficiencies despite challenges in implementation and validation. We must be realistic when it comes to the application of AI in enterprises, and it is important to understand the real capabilities and limitations, and the role of existing vendors in integrating AI functionalities into their products.
Generative AI (GenAI) is becoming a focal point of discussion across various industries, extending beyond software development and IT into the realms of business, marketing, and executive decision-making. The question remains whether GenAI can substantially impact enterprises or if it remains largely a buzzword with limited practical application.
The current state of GenAI is nascent, primarily enhancing existing processes rather than creating revolutionary changes. Enterprises are contemplating the investment required to integrate GenAI, questioning whether it will merely speed up current operations or genuinely transform business models. The challenge lies in identifying groundbreaking applications that justify the investment in GenAI.
There are compelling arguments for GenAI’s potential benefits within enterprises. For instance, it can automate unit tests, functional tests, and other repetitive tasks, saving significant time and resources. Additionally, GenAI can facilitate complex queries and data analysis, enabling businesses to extract more value from their data. However, these applications often appear as incremental improvements rather than revolutionary changes.
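The sketch below shows one plausible shape for AI-assisted test generation. The llm_complete() function is a hypothetical wrapper around whatever model or assistant a team uses; the generated test still goes through review and the CI pipeline before it is trusted.

```python
# Hedged sketch of AI-assisted test generation. llm_complete() is a
# hypothetical callable around whichever model a team actually uses.
from typing import Callable

def generate_unit_test(function_source: str,
                       llm_complete: Callable[[str], str]) -> str:
    prompt = (
        "Write a pytest unit test for the following Python function. "
        "Cover normal input and one edge case.\n\n"
        f"{function_source}\n"
    )
    candidate_test = llm_complete(prompt)
    # The candidate is saved as a draft; a human reviews it and CI runs it
    # before it is merged. The AI only removes the tedium of the first pass.
    return candidate_test
```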
A significant concern for enterprises is the defensive posture they must adopt regarding AI. Employees across various departments are already using public AI frameworks, raising issues about data security and intellectual property. Companies are increasingly looking to develop internal AI models using proprietary data to mitigate these risks, although the timeline for achieving valuable outcomes remains uncertain.
One of the critical advantages of GenAI is its ability to understand and process natural language, which can simplify complex tasks like setting security policies or automating customer service interactions. This capability can reduce the need for extensive manual intervention, potentially decreasing errors and increasing efficiency.
However, the revolutionary impact of GenAI is still debatable. Many enterprises may not fully understand what GenAI entails, often driven by the hype rather than a clear strategy. While GenAI can offer significant improvements in specific areas, such as marketing automation or internal data analysis, these applications might not be as transformative as some might hope.
The integration of GenAI into existing enterprise systems, such as CRM platforms or security frameworks, may offer more immediate and tangible benefits. For example, using GenAI to enhance search capabilities within a company’s knowledge base or to automate routine tasks can provide value without requiring a complete overhaul of existing processes.
Despite the potential benefits, there is a need for enterprises to approach GenAI with a clear understanding of its capabilities and limitations. Investing in GenAI should be driven by well-defined use cases that align with the company’s strategic goals rather than by the desire to follow industry trends.
In conclusion, while GenAI holds promise for enhancing various aspects of enterprise operations, its ability to move the needle significantly is still uncertain. The technology’s true value will likely emerge over time as companies experiment with different applications and integrate GenAI into their broader digital transformation strategies. For now, enterprises should focus on realistic, incremental improvements while keeping an eye on the evolving landscape of GenAI.
It’s Time for Private 5G in the Enterprise
Jun 18, 2024
Wi-Fi has changed the way we work in the office, but it’s not the only wireless technology. Challenging environments require new solutions like private 5G. In this episode, Tom Hollingsworth is joined by Mark Houtz and Shaun Neal as they discuss the rise of private LTE/5G technologies outside of the carrier space. They discuss the use cases where private 5G makes the most sense as well as why teams would choose to deploy this technology in the uncarpeted enterprise. They wrap up by suggesting questions that should be asked before embarking on a private 5G deployment.
To learn more about Networking Field Day Exclusive at Aruba Atmosphere, head to the event page on the Tech Field Day website.
Private LTE/5G came about because of the need for additional spectrum resources before the authorization of 6 GHz by the US FCC. After the release of that spectrum and its adoption into Wi-Fi 6E the technology formerly known as CBRS has grown to encompass a series of specialized use cases. Wi-Fi isn’t going away, despite recent papers saying it is more of a legacy technology. Private 5G has a much more specific kind of use case, such as the uncarpeted enterprise. This term describes offices that might have more rugged components, such as warehouses or manufacturing floors. The need to cover these areas sufficiently usually means higher costs using Wi-Fi access points. Private 5G excels here because of the higher power capabilities and better spectrum usage.
Private 5G is also a critical component for organizations with highly regulated network requirements, such as the US federal government. Because these organizations need to own their communications infrastructure end-to-end, they cannot rely on carriers that provide 5G or LTE services, so the only real option available to them is a private deployment. The small cell sizes used in private 5G also make it much better at providing service indoors, as opposed to the large cells carriers deploy to cover a wider area for subscribers.
The episode closes with the big question that potential customers need to ask of their providers: what is your use case? If you don’t have a compelling use case for the technology or the reasoning behind deploying it is simply “the Wi-Fi doesn’t work” then you might find yourself disappointed by the performance as well as the additional costs. Private 5G is cost competitive with Wi-Fi but only because a single CBRS node can cover a much wider area than a Wi-Fi AP, albeit at a higher cost-per-node. If you have a need to cover outdoor areas or challenging RF environments you should definitely investigate deploying private 5G to your enterprise. It’s about time.
Cloud Native is Just a Marketing Term
Jun 11, 2024
Software developers used to use the term cloud native to describe applications that are designed for the cloud, but today it seems to be more of a term for containerized applications. This episode of the Tech Field Day podcast, recorded ahead of Cloud Field Day 20, includes Guy Currier, Jack Poller, Ziv Levy, and Stephen Foskett discussing the true meaning of cloud native today. Merely running a monolithic application in containers doesn’t make it cloud native, though it certainly can be beneficial. To be truly cloud native, an application has to be microservices based and scalable, and built to take advantage of modern application platforms and resources. There is some question whether a cloud native application needs to have API access, telemetry and observability, service management, network and storage integration, and security. But ultimately the words used to describe an application are less important than the value and benefits of it. Although it is disappointing that the definition of cloud native has been watered down, the core concepts still have value.
Software developers originally coined the term “cloud native” to describe applications specifically designed for cloud environments. However, over time, this term has evolved, or arguably devolved, into a buzzword often associated with containerized applications.
Initially, “cloud native” was a term to describe applications that were fundamentally designed to leverage cloud resources. These applications were built with the understanding that cloud resources could be ephemeral and not always persistently available. This necessitated a design philosophy that embraced the cloud’s self-service nature and inherent fragility. Fast forward to the present, and “cloud native” is often synonymous with containerization, particularly Kubernetes. This shift has led some to question whether the term retains its original meaning or has been diluted.
In the cybersecurity realm, “cloud native” is frequently used to distinguish applications developed specifically for the cloud from legacy applications that were adapted to run in cloud environments. This distinction is crucial for understanding the capabilities and limitations of a given application. However, the term’s overuse and varied interpretations can lead to confusion, as different vendors and stakeholders may have different definitions of what constitutes a cloud native application.
Within the infrastructure world, “cloud native” has become almost interchangeable with “Kubernetes.” This association is not without merit, as Kubernetes has become a cornerstone of modern cloud infrastructure. However, equating cloud native solely with containerization overlooks the broader architectural principles that truly define cloud native applications.
A critical aspect of cloud native applications is their design around microservices and scalability. Simply running a monolithic application in a container does not make it cloud native. True cloud native applications are built to take full advantage of modern application platforms and resources. This includes being microservices-based, scalable, and capable of leveraging the cloud’s dynamic nature.
There is a question as to whether cloud native applications need to have API access, telemetry, observability, service management, network and storage integration, and robust security. While these features are often associated with cloud native applications, they are not necessarily definitional. For instance, APIs are a standard way to interact with cloud native systems, allowing for automation and scalability. However, an application could theoretically be cloud native without relying heavily on APIs.
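As an example of API-driven operation, the following sketch scales a deployment programmatically. It assumes the official kubernetes Python client, and the deployment and namespace names are placeholders rather than anything from a real environment.

```python
# Minimal sketch of API-driven scaling, assuming the official `kubernetes`
# Python client. "web" and "default" are placeholder names.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    config.load_kube_config()  # or load_incluster_config() inside a cluster
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Example: scale out in response to load, then back in later.
# scale_deployment("web", "default", 5)
```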
A similar question can be raised about telemetry and observability. These features provide critical insights into how an application is performing and behaving in a cloud environment, making them indispensable for managing cloud native applications effectively.
Security and networking are also crucial components. Cloud native applications must be designed with security in mind, leveraging various services and controls to ensure a secure stack. Networking, often overlooked, plays a vital role in ensuring the responsiveness and availability of cloud native applications.
Despite the term’s dilution, the core concepts of cloud native applications—scalability, performance, stability, and predictability—remain valuable. These principles enable the development of applications that can grow and adapt to meet changing demands, providing a significant advantage over traditional monolithic architectures.
Ultimately, while “cloud native” may be used as a marketing term, its true value lies in the benefits it delivers. As long as an application meets the goals of scalability, performance, and resilience, the specific terminology becomes less important. However, it is crucial for vendors and developers to back up their claims with tangible results, ensuring that the term “cloud native” retains its significance in the industry.
AI is at the top of the hype cycle and it feels unstoppable. Once upon a time blockchain was in the same place. In this episode, Tom Hollingsworth is joined by Evan Mintzer and Jody Lemoine. They tackle the surge behind AI development as well as the way the technology is portrayed in the industry. They compare how blockchain and AI are both solutions in search of a problem and how AI might better avoid the fate of DeFi. Also discussed is the potential way for AI-related companies to avoid issues with the AI bubble popping.
AI is at the top of the hype cycle and it feels unstoppable. Once upon a time blockchain was in the same place. In this episode, Tom Hollingsworth is joined by Evan Mintzer and Jody Lemoine. They tackle the surge behind AI development as well as the way the technology is portrayed in the industry. They compare how blockchain and AI are both solutions in search of a problem and how AI might better avoid the fate of DeFi. Also discussed is the potential way for AI-related companies to avoid issues with the AI bubble popping.
AI is being marketed as the solution for every problem you might have, known or unknown. While this does sound like the way to solve your difficult problems, it creates a problem in and of itself. Blockchain was also marketed as a solution to all your issues just a few years ago. In the IT space, OpenFlow was the same just a few years before that. Once the hype cycle died down and people realized these technologies were good for specific applications the market had already moved on to the next great magical solution. Users that had invested heavily in the previous hype cycle were left holding the bag.
AI is a tool, just like any other. It’s not a panacea that fixes every issue. Adding it to a platform for dubious reasons or in the hope of creating additional value is short-sighted. AI won’t make everything it touches better without serious investment in time and effort to match the resources to the outputs. A company that spends time making sure AI has value to its customers will succeed. A company that is just tossing AI into its product to get additional funding is going to fail. This is doubly true where adding AI to the solution creates new problems, such as data retention issues or security holes.
AI needs to be evaluated carefully to prove it is valuable to users. Don’t just blindly accept that AI is the one thing we’ve been missing to reach utopia. Whether or not the billions of dollars invested in the infrastructure pay off, remember that we’ve done things like this before. Given the track record of the average tech startup, we will be doing something like it again, and again.
Evan Mintzer is the Director of Production Infrastructure at Customers Bank. You can connect with Evan on LinkedIn or on X/Twitter and learn more on his website.
Application Modernization Requires Good Security Practices
May 28, 2024
As application development and modernization moves forward, security has never been more important. This episode of the Tech Field Day podcast introduces AppDev Field Day with a discussion of the importance of DevSecOps featuring Paul Nashawaty, Mitch Ashley, Michael Levan, and Stephen Foskett. Application security isn’t just about the vulnerabilities in the application itself; the entire software stack must be secure. There are many approaches, from vulnerability scanning to minimization of the attack surface, but the most important thing is to build security into software from the start. There are many parallels between physical infrastructure and software applications, with many of the same security considerations. Various components make up a software bill of materials (SBOM) and any of these can expose a vulnerability or be attacked. Platform engineering is an important connector between infrastructure and developers, and plays a major role in reducing the attack surface. It’s all about bringing expertise to the table to build supportable and secure platforms for modern applications.
Application development and modernization require a strong focus on security to ensure both new and heritage applications are protected. Applications can be categorized into heritage, modern (containerized and orchestrated), and future (potentially involving web assembly and serverless technology) states. Addressing security challenges such as skill gaps, refactoring decisions, and ecosystem integration is crucial.
The rise of open-source vulnerabilities has made application security increasingly overwhelming. It is essential to secure the entire software lifecycle, including the supply chain and toolchain used for development, testing, and deployment. A holistic approach to application security, beyond just APIs and vulnerabilities, is necessary.
While securing containers, pods, clusters, and VMs is important, the primary focus should be on code security. Vulnerabilities within the code cannot be compensated for by external security measures. Organizations must integrate security at the code level to ensure robust protection.
Reducing the attack surface is a key part of modernization efforts. Refactoring monolithic applications into microservices and containerizing them can help streamline applications and minimize their attack surface. Segregating common elements and business logic reduces exposure and enhances security.
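As a simple illustration, the sketch below pulls one invented piece of business logic out of a hypothetical monolith and exposes it as a single-purpose Flask service. The pricing rule and route names are made up; the small, well-defined interface is what shrinks the attack surface.

```python
# Illustrative sketch of carving one piece of business logic out of a
# monolith into a small, single-purpose service (assumes Flask).
from flask import Flask, jsonify, request

app = Flask(__name__)

def quote_price(quantity: int, unit_price: float) -> float:
    """Business logic that previously lived deep inside the monolith."""
    discount = 0.1 if quantity >= 100 else 0.0
    return round(quantity * unit_price * (1 - discount), 2)

@app.route("/quote", methods=["POST"])
def quote():
    payload = request.get_json(force=True)
    total = quote_price(int(payload["quantity"]), float(payload["unit_price"]))
    return jsonify({"total": total})

if __name__ == "__main__":
    app.run(port=8080)
```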
The concept of “shift left” involves integrating security early in the development process rather than treating it as an afterthought. This approach ensures that security is built into the design and development process from the start, much like incorporating airbags into a car during manufacturing rather than adding them later.
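A toy example of shifting left: a check that fails the build when a pinned dependency matches a known-bad version. Real pipelines would use a proper software composition analysis tool; the advisory list here is invented purely for illustration.

```python
# Toy "shift left" gate: fail the build if a pinned dependency matches a
# (hypothetical) advisory list. Real pipelines would use an SCA scanner.
import sys

KNOWN_BAD = {("requests", "2.5.0"), ("pyyaml", "5.3")}  # invented advisories

def check_requirements(path: str = "requirements.txt") -> int:
    failures = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("#") or "==" not in line:
                continue
            name, version = line.split("==", 1)
            if (name.lower(), version) in KNOWN_BAD:
                failures.append(line)
    for bad in failures:
        print(f"vulnerable dependency pinned: {bad}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(check_requirements())
```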
Platform engineering plays a significant role in enhancing security. Platform engineers build environments for internal teams, including QA, security, IT, DevOps, and developers. This role requires a deep understanding of networking, infrastructure, virtualization, Kubernetes, software development, and security. It is a position suited for senior or principal-level engineers with extensive experience.
Platform engineering focuses on creating supportable, sustainable, and secure platforms for developers. Leveraging expertise to deliver secure and reliable systems aligns with broader IT goals.
Integrating security into every aspect of application development and modernization is crucial. Adopting DevSecOps principles, emphasizing platform engineering, and taking a holistic approach to security are essential for building secure applications. By prioritizing code security, reducing the attack surface, and leveraging expertise, organizations can enhance their security posture and ensure successful modernization efforts.
Platform Engineering Is the Revenge of IT Operations
May 21, 2024
The rise of platform engineering demonstrates how difficult it has been to balance DevOps between developers and operations. This episode of the Tech Field Day podcast features a discussion of platform engineering with Mitch Ashley and Mike Vizard of Techstrong Group and Stephen Foskett and Tom Hollingsworth of Gestalt IT. We explore the dynamic interplay between DevOps and platform engineering, two methodologies shaping the modern IT landscape. DevOps fosters a culture of collaboration, continuous integration, and rapid software deployment, promoting agility and innovation. However, the challenges of scalability and standardization within DevOps have given rise to platform engineering, which focuses on creating and managing standardized development and operational platforms. This approach aims to enhance DevOps by providing a stable foundation, allowing developers to concentrate on coding and innovation. We delve into how these seemingly divergent approaches are, in fact, complementary, balancing the need for speed and innovation with the demands of security, compliance, and scalability in the ever-evolving IT industry.
The collision between IT operations and software development reveals the challenges of achieving efficiency, agility, and innovation. The interplay between DevOps and platform engineering, two approaches integral to the modern IT ecosystem’s growth and transformation, is the heart of the current conflict. DevOps emerged to bridge the gap between development and operations, promoting collaboration, continuous integration, and rapid software delivery. But the challenge of scalability gave rise to platform engineering, a strategic response focused on standardization, improved security, and performance.
Platform engineering can be seen as the revenge of IT operations, addressing the chaos of disparate tools and methods across teams by providing a shared foundation of tools and processes. This standardization reduces friction, enhances security, and improves overall efficiency. Yet, it raises questions about flexibility and innovation. DevOps was initially embraced to move away from rigid, centralized IT practices, and platform engineering’s emphasis on consistency can feel like a step back. Developers worry about losing the freedom to experiment with new tools, potentially stifling innovation.
The friction between DevOps and platform engineering reflects a broader debate within IT about balancing innovation with control. While DevOps champions flexibility, platform engineering emphasizes structured workflows and scalability, aiming to alleviate the drudgery of managing infrastructure and tooling. This allows developers to focus more on coding and innovation. However, it is crucial to maintain a middle ground, where platform engineering does not become the “land of no” but rather supports the “department of yes” by allowing flexibility to adopt new methods and tools.
Platform engineering’s goal is to create a stable environment where developers can thrive without constant interruptions and context switching. By reducing cognitive load and allowing dedicated time for development, platform engineering aims to enhance productivity and creativity. However, this approach must avoid becoming a benevolent dictatorship, recognizing that the best tool for every task is not always possible. Platform engineering runs on consensus, providing a balance between structure and flexibility.
As companies evolve, they must navigate the tension between standardization and innovation, finding ways to integrate new tools without compromising security and performance. The rise of consumable versions of complex platforms, like Salesforce, exemplifies how standardization can coexist with customization, providing a language for how things should be done. Ultimately, platform engineering is about making thoughtful, consensus-driven decisions that support both the organization’s needs and developers’ creativity, ensuring that IT operations and development methodologies continue to advance in harmony.
Wi-Fi 7 Isn’t Enough For Future Wireless Needs
May 14, 2024
New technology standards can’t anticipate how users will consume content and applications, and revisions to the standards will be adopted to meet their needs. In this episode, Tom Hollingsworth is joined by Ron Westfall, Drew Lentz, and Rocky Gregory as they discuss where Wi-Fi 7 falls short. Even though Wi-Fi 7 is a new standard, it is still based on older thinking, and users have changed the way they consume content and applications. This episode discusses the difference between cloud-hosted applications and local software as well as the drive to increase performance on edge access points to include faster response times to things like AI assistants.
When the development of Wi-Fi 7 started, the promise of 5G speeds was simply theoretical. We were still struggling with the rollout of LTE, and given how problematic 3G was before it, Wi-Fi just made sense. Fast forward to the modern wireless era and you find that 5G connectivity is not only more stable but, in many cases, much faster than the networks you can connect to at the local coffee shop. Add in the protection mechanisms inherent in cellular technology and it appears to be a significantly better user experience.
Users have also changed the way they work. Before the pandemic, the majority of work was done with applications that connected to internal resources at a company office. You needed private wireless connectivity to access important resources. The cloud was making changes, but users still felt comfortable working at their desks. Five years later, most work is done through applications that connect to cloud resources. There isn’t as much of a need to go to the office, and that’s if you even still have one. Users don’t need fast enterprise connectivity. They just need to get to the cloud somehow.
The third major factor in the lack of performance for Wi-Fi 7 is the rise of more intensive applications. Modern AI development sees a significant push to have processing done centrally while algorithms run on the edge. We need more powerful devices on the edge to take advantage of those capabilities, but the previous trend was to use more modest devices so that edge switches could power them within standard power budgets. While power standards have increased to allow for more capable devices, the older style of thinking still persists.
This episode debates these topics as well as others to help you understand what the current state of Wi-Fi 7 is and how it will help you and your users with their connectivity needs.
Data Quality is More Important Than Ever in an AI World with Qlik
May 07, 2024
In our AI-dominated world, data quality is the key to building useful tools. This episode of the Tech Field Day podcast features Drew Clarke from Qlik discussing best practices for integrating data sources with AI models with Joey D’Antoni, Gina Rosenthal, and Stephen Foskett before Qlik Connect in Orlando. Although there is a lot of hype about AI in industry, companies are realizing the risks of generative AI and large language models as well. Solid data practices in terms of data hygiene, proven data models, business intelligence, and flows can ensure that the output of an AI application is correct. The proliferation of Generative AI is also causing a rapid increase in the cost and environmental impact of IT systems, and this will impact the success of the technology. Good data practices can help, allowing a lighter and less expensive LLM to produce quality results. The Tech Field Day delegates will learn more about these topics at Qlik Connect in Orlando, and we will be recording and sharing content as well.
As AI technologies like generative AI and large language models continue to appear, the foundation upon which these technologies are built – data – becomes the linchpin of their success. This episode of the Tech Field Day podcast features a discussion with Drew Clarke from Qlik, alongside industry experts Joey D’Antoni, Gina Rosenthal, and Stephen Foskett. Ahead of Qlik Connect in Orlando, the panel discussed the best practices for integrating data sources with AI models, underlining the importance of data quality in an AI-dominated world.
The proliferation of AI technologies has brought with it an increased awareness of the potential risks associated with generative AI and LLMs. As companies venture into the realm of AI, the realization that not all AI is capable of delivering accurate or useful outcomes has become apparent. This acknowledgment has brought traditional data practices such as data hygiene, data quality, proven data models, business intelligence, and data flows into the spotlight. These practices ensure that the output of an AI application is correct and reliable.
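A minimal sketch of that hygiene step, assuming pandas and invented column names: deduplicate, drop incomplete records, and discard obviously invalid rows before any of the data reaches a model.

```python
# Simple data hygiene sketch with pandas. Column names and data are invented;
# the checks (deduplication, missing values, invalid records) are the point.
import pandas as pd

def clean_sales_records(df: pd.DataFrame) -> pd.DataFrame:
    cleaned = df.drop_duplicates(subset=["transaction_id"])
    cleaned = cleaned.dropna(subset=["customer_id", "amount"])
    cleaned = cleaned[cleaned["amount"] > 0]  # drop clearly invalid rows
    return cleaned

# Example usage with an invented frame:
raw = pd.DataFrame({
    "transaction_id": [1, 1, 2, 3],
    "customer_id": ["a", "a", None, "c"],
    "amount": [10.0, 10.0, 25.0, -5.0],
})
print(clean_sales_records(raw))  # only transaction 1 survives the checks
```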
One of the critical challenges is the integration of data into LLMs and small language models. We consider metadata, data security, and the implications of regulations like GDPR and the California Data Privacy Act on data integration with AI models. It is critical to consider data privacy and to avoid exposing private data as companies integrate data into their AI models.
We should also consider societal and environmental impacts of the rapid increase in the use of AI as well as the cost of inferencing. The environmental footprint of data centers, driven by the energy and water consumption required to support AI computations, is a particular area of concern. This underscores the need for good data practices that not only ensure the quality of AI outputs but also contribute to the sustainability of AI technologies.
Data is a key product in an AI world, and we must treat data with the same care and consideration as we do in conventional applications. This involves curating, managing, and continuously improving data to ensure its quality and relevance. Data engineers and business analysts play a key role in enhancing productivity and effectiveness of AI capabilities.
This discussion is a reminder of the critical importance of data quality in the age of AI. As companies navigate the complexities of integrating AI into their operations, the foundational principles of data hygiene, data quality, and proven data models remain as relevant as ever. We look forward to discussing these themes at Qlik Connect in June, and invite our audience to attend the event!
Containerization is Required to Modernize Applications at the Edge
Apr 30, 2024
Modern applications are widely deployed in the cloud, but they’re coming to the edge as well. This episode of the Tech Field Day podcast features Alastair Cooke and Paul Nashawaty from The Futurum Group, Erik Nordmark from ZEDEDA, and host Stephen Foskett discussing the intersection of application modernization and edge computing. As enterprises look to deploy more applications at the edge, they are leveraging technologies like Kubernetes and containers to enable portability, scalability, resilience, and high availability. In many cases customers are moving existing web applications to the edge to improve performance and security, but not all webscale technologies are appropriate for the limited hardware, environmental conditions, and connectivity found at the edge. The question is whether to improve the edge compute platform or build resiliency into the application itself. But there are limits to this approach, since edge locations don’t have the elasticity of the cloud and many of the features of Kubernetes were not designed for limited resources. It comes down to developer expectations, since developers are now accustomed to the experience of modern webscale platforms and expect this environment everywhere. In the future, we expect WASM, LLMs, and more to be used regardless of location.
The modernization of applications, from datacenter to cloud to edge, is rapidly progressing. Technologies drawn from the hyperscale world are finding their way to edge locations, where data processing and analysis occur closer to the source of data and customer transactions. This shift is driven by the need for real-time processing, reduced latency, and enhanced security, and technologies like Kubernetes and containers are increasingly used to facilitate this transition.
The Benefits of Containerized Applications at the Edge
Containerization offers many benefits essential for modern applications, especially those deployed at the edge. It provides a level of portability that allows applications to be easily moved and managed across different environments, from the cloud to the edge, without the need for extensive reconfiguration or adaptation. This is particularly important given the diverse and often resource-constrained nature of edge environments, which can vary greatly in terms of hardware, connectivity, and operational conditions.
Scalability is another critical aspect of containerization that aligns well with the needs of edge computing. Containers enable applications to be decomposed into microservices, allowing for more granular scaling and management. This microservices architecture facilitates the efficient use of resources, enabling applications to scale up or down based on demand, which is particularly useful in edge environments, again in the face of resource constraints.
Resilience and high availability are further enhanced through containerization. By deploying applications as a set of interdependent but isolated containers, developers can achieve a level of redundancy and fault tolerance that is difficult to achieve with monolithic architectures. This is crucial at the edge, where the risk of hardware failure, network disruptions, and other environmental factors can pose significant challenges to application availability and reliability.
The security benefits of containerization should not be overlooked in the context of edge computing either. Containers provide a level of isolation that helps mitigate the risk of cross-application interference and potential security breaches. This isolation is complemented by the ability to apply granular security policies at the container level, enhancing the overall security posture of edge deployments. And containerized applications are easier to keep up to date as security patches are developed.
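To make these benefits concrete, the sketch below shows what a containerized edge workload might look like when expressed declaratively. It is a minimal illustration only: the image name, node label, and resource figures are hypothetical, and a real edge deployment would tune them to the hardware on hand. The manifest is built as a Python dictionary and rendered to YAML with PyYAML.

```python
# Minimal sketch: a Kubernetes Deployment manifest for a hypothetical edge
# workload, expressed as a Python dict and rendered to YAML with PyYAML.
# The image name, node label, and resource figures are illustrative only.
import yaml

edge_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "sensor-gateway", "labels": {"app": "sensor-gateway"}},
    "spec": {
        "replicas": 2,  # modest redundancy; edge sites rarely have cloud-style elasticity
        "selector": {"matchLabels": {"app": "sensor-gateway"}},
        "template": {
            "metadata": {"labels": {"app": "sensor-gateway"}},
            "spec": {
                "nodeSelector": {"node-role.example.com/edge": "true"},  # hypothetical label
                "containers": [{
                    "name": "gateway",
                    "image": "registry.example.com/sensor-gateway:1.4.2",  # hypothetical image
                    # Explicit requests and limits matter on resource-constrained edge hardware.
                    "resources": {
                        "requests": {"cpu": "100m", "memory": "128Mi"},
                        "limits": {"cpu": "500m", "memory": "256Mi"},
                    },
                }],
            },
        },
    },
}

print(yaml.safe_dump(edge_deployment, sort_keys=False))
```

The point is the shape of the artifact: explicit replica counts for modest redundancy, and explicit resource requests and limits so the scheduler respects a constrained node.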
Challenges for Modern Applications at the Edge
Despite these advantages, the deployment of containerized applications at the edge is not without its challenges. The resource limitations of edge environments, including constraints on compute power, storage, and network bandwidth, require careful consideration of the containerization strategy employed. Additionally, the management and orchestration of containers at the edge introduce complexity, particularly in highly distributed environments with potentially thousands of edge locations.
The choice between improving the edge compute platform to better support containerization and building resilience into the application itself is a critical decision. While enhancing the edge platform can provide a more robust foundation for containerized applications, it may require significant financial and technological investment. Designing applications with inherent resilience and adaptability can offer a more immediate solution, but this approach may not achieve all of the benefits of containerization.
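As a rough illustration of what building resilience into the application itself can mean, here is a minimal sketch of an edge service that buffers data locally and retries uploads with exponential backoff when the WAN link drops. Everything in it is hypothetical; upload_batch stands in for whatever transport a real application would use.

```python
# Minimal sketch of application-level resilience for an intermittently connected
# edge site: buffer readings locally and retry uploads with exponential backoff.
# upload_batch() is a hypothetical stand-in for the real transport layer.
import random
import time
from collections import deque

buffer = deque(maxlen=10_000)  # bounded local queue to respect limited edge storage

def upload_batch(batch):
    """Hypothetical uploader; raises ConnectionError when the WAN link is down."""
    raise ConnectionError("uplink unavailable")  # placeholder for a real call

def flush(max_attempts=5):
    batch = list(buffer)
    for attempt in range(max_attempts):
        try:
            upload_batch(batch)
            buffer.clear()
            return True
        except ConnectionError:
            # Exponential backoff with jitter so retries do not synchronize.
            time.sleep(min(60, 2 ** attempt) + random.random())
    return False  # give up for now; data stays queued for the next flush
```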
The expectations of developers, accustomed to the rich features and flexibility of modern cloud-native platforms, also play a significant role in the adoption of containerization at the edge. Developers seek environments that offer the same level of agility, ease of use, and comprehensive tooling they are familiar with in the cloud, driving the demand for containerization technologies that can replicate this experience at the edge.
Looking forward, the evolution of containerization at the edge is likely to be influenced by emerging technologies such as WebAssembly (WASM) and large language models (LLMs). WASM promises to enhance the portability and efficiency of applications across diverse computing environments, including the edge, by enabling more lightweight and adaptable application architectures. The integration of AI and machine learning capabilities, particularly for processing and analyzing data at the edge, will further drive the modernization of applications in these distributed environments.
Containerization and the Edge
Containerization is a fundamental enabler for the modernization of applications in the cloud, and this is true at the edge as well. It offers the portability, scalability, resilience, and security necessary to address the unique challenges of edge computing, while also meeting the expectations of developers for a modern application development environment. As enterprises continue to push the boundaries of what is possible at the edge, containerization will play a pivotal role in shaping the future of edge computing, driving innovation and enabling new levels of performance and efficiency.
Security Audits Cause More Harm Than Good
Apr 23, 2024
Security audits are painful and often required for compliance, but they aren’t adversarial unless you have a bad auditor or bad policy compliance. In this episode, Tom Hollingsworth sits down with Teren Bryson, Skye Fugate, and Ben Story to discuss the nuances of audits. The panel discusses the discovery of technical debt, external versus internal auditing, the need for flexibility in procedures, and how good auditors can make for more positive outcomes.
Thorough audits will uncover issues with compliance as well as technical debt. This could include older devices that should have been replaced at the end of their life. It could also find code versions that are vulnerable to exploits and could lead to more issues. While operations teams don’t like being told things aren’t as they should be, it’s better to know about those problems early, before they get out of control.
It is also important to understand that there are different reasons to have an audit. The most common perception is that external organizations are auditing your enterprise to comply with their policies and procedures, such as a partnership or acquisition. However, internal audits carried out by third parties to verify compliance with your own policies are much more frequent. How can you ensure that you are doing what you say you’re doing if you don’t have someone else take a look at your policies to ensure they’re being followed? This is also the place where you find issues with user compliance, such as executives who believe the rules don’t apply to them.
A good auditor can make the difference in your audit experience. The best auditors are knowledgeable in the subject area and understand what is needed for compliance. They also ensure that you have time to remediate the issues. A bad auditor is one that only follows the strict procedures and doesn’t understand the nuance in auditing. They are often perceived as adversarial and cause IT teams to dread audits.
If you want to have a good audit experience you should keep two things in mind. The first is that you should assume that it will be a positive experience. The auditors are doing a job and they aren’t trying to hurt you or your company. The second thing to keep in mind is to answer the questions asked without volunteering information. You can innocently offer additional information to a question that leads to a negative experience because it forces the auditor to uncover things they weren’t originally tasked to find.
AI is Smarter Than Your Average Network Engineer
Apr 16, 2024
Recent advances in AI for IT have shown the huge potential for changing the way that we do work. However, AI can’t replace everyone in the workforce. In this episode, Tom Hollingsworth is joined by Rita Younger, Josh Warcop, and Rob Coote as they look at how the hype surrounding AI must inevitably be reconciled with the reality of real people doing work. They discuss the way that AI is judged for its mistakes versus a human as well as how marketing is pushing software as the solution to all our staffing ills.
AI will change the way that IT teams configure and manage their systems, but it will take time for those teams to integrate it into their current workflows and assure everyone it is a boon. This episode features Rita Younger, Joshua Warcop, and Rob Coote talking to Tom Hollingsworth about how much AI has already changed and how far it has to go in order to be a fully featured solution. They discuss not only the gaps in AI but the gaps that knowledge workers embody today.
While AI is drawing from a depth of knowledge that encompasses documentation and best practices, it is not a perfect solution today. There is nuance in the discussion that comes from years of experience in a given discipline. People learn from their mistakes, and we expect them to make those mistakes as they grow as workers. When AI makes mistakes, we are immediately skeptical and race to find a way to prevent it from ever happening again. Should AI be given the same grace that we extend to people?
Another part of this discussion highlights how AI is touted to replace so many things in IT, yet we’ve heard this hype before and no one has been completely put out of work. Every technology eventually finds a niche to fill and performs to its capabilities instead of the overinflated promise that was used to market it. As a technology matures, operations and design teams find the optimal way to use any solution instead of accepting the idea that it will replace everything.
The discussion wraps up with ideas from the panel about what questions you should be asking today when it comes to AI. The reality is that we will need to incorporate this technology into our workflows but we need to verify that it’s going to be a help as we train it to do the things we want it to do for us.
Cyber Resiliency is Just Data Protection
Apr 09, 2024
Cyber Resiliency is a term that encompasses much more than simply protecting data. This episode features Tom Hollingsworth joined by Krista Macomber and Max Mortillaro discussing the additional features in a cyber resiliency solution and the need to understand how data needs to be safeguarded from destruction or exploitation. The episode highlights the shift from reactive to proactive measures as well as the additional integrations that are needed between development, deployment, and operations teams to ensure success.
The panel talks about how backup and recovery have always been seen as a reactive measure to disaster and not an integrated piece of a more proactive solution for other outage causes. Only when security incidents became more impactful and caused more data loss or theft did the need arise for more protective measures, such as entropy detection of data corruption or immutability of stored copies.
However, truly resilient solutions need more than just technical features. Other necessary pieces like policy-based enforcement of data retention and recovery objectives are crucial. So too is the need for security measures that prevent critical system processes from being exploited to achieve attacker goals. Operations teams must be involved in the entire process to keep users online with clean data while also allowing incident response teams to investigate and eliminate points of intrusion and data corruption and loss.
The episode wraps up with important questions that need to be answered when investigating solutions. Just because someone tells you a solution is resilient doesn’t mean you should believe their claims. By asking good questions about the capabilities of the system in the investigation phase, you should find yourself with a usable system to prevent data loss and ensure business continuity in the future.
Credible Content From the Community is More Important than Ever
Apr 02, 2024
There is a hazardous amount of AI-generated and SEO-oriented content being generated, and the solution is real stories from real communities. In the first episode of the Tech Field Day Podcast, recorded on-site at AI Field Day, Stephen Foskett chats with Frederic Van Haren, Gina Rosenthal, and Colleen Coll about confronting inauthentic content. The internet is inundated with low-quality, AI-generated, and SEO-driven content, and the antidote is the cultivation of real, credible voices within the tech community. The discussion focuses on the importance of community-driven content and the credibility of individual voices in an era dominated by content optimized for algorithms rather than human engagement. The rise of generative AI in content creation and consumption is accelerating, and we must all find a balance between technological advancements and human insight. This is the essence of the Tech Field Day experience, which fosters meaningful dialogue among tech professionals and companies in the industry. For fifteen years, Tech Field Day has highlighted the critical role of human connection and credible voices in navigating the digital information landscape, and this re-launched podcast is part of that continuing effort.
The entire internet is saturated with AI-generated content and SEO-driven articles, and tech media is no exception. Gestalt IT was founded to highlight the value of genuine community engagement and the power of independent voices. That’s why we started Tech Field Day in 2009, to give these independent technical experts a platform to learn, share, and explore enterprise IT. Today we are re-launching the Tech Field Day podcast to return to this foundational ethos but also rise to the challenge to provide credible, human-centered narratives in tech media.
The internet is the foundation for our industry, but this same technology threatens to undermine the fabric of genuine voices. The proliferation of content optimized for algorithms rather than humans has diluted the quality of information, leaving readers navigating a maze of inauthenticity and downright falsehood. That’s why community-driven content and the credibility of individual voices is so important. It’s a challenge that the Tech Field Day podcast is being re-launched to address head-on.
Tech Field Day is designed to be an environment where technology professionals and companies can engage in meaningful dialogue. This engagement is built on their authentic voices, and we have always tried to bring a diverse array of insights and experiences to the table. Like the event series, this podcast is designed to serve as a platform for these voices to cut through the noise of generative AI and SEO manipulation, offering perspectives rooted in real-world experiences and knowledge.
This first episode focuses on AI’s impact on society and the authenticity crisis in content creation. How can we build human connection when so much is automated? And yet we are not anti-AI: The challenge is how to leverage its capabilities while always ensuring that it complements rather than replaces our authentic voices. We are reminded of the importance of critical thinking and the value of community in navigating the flood of information. We need to find a balance between technological advancement and maintaining the integrity of human expression.
Inauthentic content can never match the feeling of a real discussion among passionate and knowledgeable experts. Tech Field Day is dedicated to fostering open discussion and we urge the tech community to rally around the principles of authenticity and credibility. SEO-optimized spam can never drown out real people as long as we keep questioning, discussing, and sharing. And we must embrace the changing social media landscape to help bring these conversations to the world through Tech Field Day events, this podcast, and our individual platforms.
Reintroducing the Tech Field Day Podcast
Mar 26, 2024
We are once again returning to the Tech Field Day name for our weekly podcast. In this episode, Stephen Foskett and Tom Hollingsworth delve into the history of the podcast, how it came to prominence and what sets it apart from other technical podcasts. We also discuss why each episode has a premise and why the name has been the On-Premise IT Podcast for so long.
Why Now?
We’re changing things up around here! Don’t worry, the only thing that is going to be different is the name of the podcast. We’re going back to the old name of Tech Field Day Podcast as a way to highlight what makes the podcast unique in the industry. Long time listeners of the show may remember it used to be the Gestalt IT Tech Field Day Roundtable over a decade ago.
Since then we’ve changed a lot about the format and content. Since 2017 we’ve been known as the On-Premise IT Podcast. It focuses on a specific topic, a premise if you will, each episode and features 3-4 Tech Field Day delegates as guests. We’ve posted 322 episodes in the past seven years talking about all aspects of enterprise technology both new and old. We’ve even focused on some non-tech issues like burnout and career growth. It’s all been for the betterment of the community at large as we bring you the opinions and perspectives of a group of experts in the enterprise IT space.
We wanted to make sure to highlight the relationship between Tech Field Day and the podcast as we move forward. The delegates at a Field Day event represent the critical voice of the practitioner and give the episodes a sense of grounded realism. This isn’t a marketing exercise or wishful thinking. These are the people that do the things and tell everyone what works and what doesn’t. They are the ones qualified to inform decision makers about the promise as well as the pitfalls.
Future episodes of our podcast will appear on the Tech Field Day site as well as through Spotify for Podcasters. We will still publish our new episodes every Tuesday so make sure you subscribe in your favorite podcatcher so you don’t miss a single premise that our wonderful delegates come up with each episode. Don’t forget to leave comments on the episodes so we know what you think. You can also leave a rating or a review in your podcatcher for others to discover the Tech Field Day Podcast.
AI Demands a New Storage Architecture with Hammerspace
Mar 19, 2024
Hammerspace unveiled a new storage architecture called Hyperscale NAS that addresses the needs of AI and GPU computing. This episode of the On-Premise IT podcast, sponsored by Hammerspace, is focused on the extreme requirements of high-performance multi-node computing. Eric Bassier of Hammerspace joins Chris Grundemann, Frederic Van Haren, and Stephen Foskett to consider the characteristics that define this new storage architecture. Hammerspace leverages parallel NFS and flexible file layout (FlexFiles) within the NFS protocol to deliver unprecedented scalability and performance. AI training requires scalability, performance, and low latency but also flexible and robust data management, which makes Hyperscale NAS extremely attractive. Now that the Linux kernel includes NFS v4.2, the Hammerspace Hyperscale NAS system works out of the box with standards-based clients rather than requiring a proprietary client. Hammerspace is currently deployed in massive hyperscale datacenters and is used in some of the largest AI training scenarios.
Combining Simplicity with Speed, with the New Hammerspace Hyperscale NAS Architecture
Data is the new currency of the modern economy. It has opened huge opportunities to drive trailblazing technologies like AI and machine learning deep into businesses and industries. But as storage systems lie jammed with volumes of unstructured data, legacy solutions are under threat. Data overabundance can easily overwhelm and disrupt these established storage solutions, leaving organizations at risk of being outperformed by their rivals.
This episode of the On-Premise IT Podcast, brought to you by Hammerspace, explores the reasons why the new data cycle requires next-generation storage systems. Eric Bassier, Sr. Director of Solution Marketing for Hammerspace, talks about a new NAS architecture that can accommodate all the data that’s heading enterprises’ way, and do it at the speed required for AI training.
A Change Is in Order
“AI is forcing a reckoning in the industry that’s probably long overdue, to change how data is used and preserved,” comments Bassier.
Bassier puts storage systems into two main categories – the traditional scale-out network-attached storage (NAS), a technology already well-known and widely deployed in organizations, and the relatively new HPC parallel file systems designed exclusively for HPC environments.
“The fact that the HPC file systems have never been widely deployed in the enterprise speaks to a gap there. They don’t have the right feature set, and are too difficult to maintain,” says Bassier.
This is also telling of an uncomfortable truth about NAS systems. “The fact that HPC file systems still exist so predominantly in HPC environments is an admission that scale-out NAS architectures don’t meet their performance demands.”
What fundamentally separates HPC and AI workloads from traditional workloads is the need for speed and performance. GPU farms for AI training require concurrent, high-speed access to data.
A Disruptive Hyperscale NAS Architecture
Hammerspace has a new architecture, Hyperscale NAS, that supports the colossal data capacity and performance demands of GPU farms.
“[The architecture] largely came out of our work with one of the world’s largest hyperscalers for their large language model training environment. It is a new storage architecture, and as more and more enterprises get into AI and drive forward their initiatives, this would be the best storage architecture for large language model training, generative AI training, and other forms of deep learning,” says Bassier.
The unnamed client has a thousand-node Hammerspace storage cluster deployed in their LLM training environment where more than 30,000 GPUs are at work across 4000 server nodes.
“The Hammerspace storage cluster is feeding those GPUs at an aggregate performance of around 100 Terabits per second. It’s 80 to 90% of line rate,” he says.
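As a rough sanity check on those figures (our own back-of-the-envelope arithmetic, not a statement from Hammerspace), 100 terabits per second spread across 1,000 storage nodes works out to roughly 100 Gb/s per node, which at 80 to 90% of line rate would be consistent with 100GbE-class connectivity per node, assuming one such link each.

```python
# Back-of-the-envelope check of the quoted figures; the per-link interpretation
# is our assumption, not a statement from Hammerspace.
aggregate_tbps = 100      # quoted aggregate throughput in terabits per second
storage_nodes = 1_000     # quoted Hammerspace storage cluster size
gpu_servers = 4_000       # quoted GPU server count

per_storage_node_gbps = aggregate_tbps * 1_000 / storage_nodes  # ~100 Gb/s per storage node
per_gpu_server_gbps = aggregate_tbps * 1_000 / gpu_servers      # ~25 Gb/s average per GPU server

print(per_storage_node_gbps, per_gpu_server_gbps)
```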
Performance aside, the reason why the client chose Hyperscale NAS for the job is its standards-based design. Hyperscale NAS is standards-based, meaning it can operate on any commercial off-the-shelf storage server, be it NAS, object or block. One of the major benefits of that is, by just sitting on top of the storage, Hyperscale NAS can accelerate the underlying system without needing a costly upgrade.
“The underpinnings of this architecture have been in Hammerspace since day one.” Bassier points to the origin of the name “Hammerspace” to underline this. A hammerspace, he explains, is an extradimensional space invisible to the eye. Characters in movies and cartoons often use it to store unusually large objects, which they summon in times of need, making it look as if they are conjured out of thin air. Think of Hermione Granger’s beaded handbag in Harry Potter, or Mary Poppins’ carpet bag.
Chris Grundemann comments, “Hyperscale NAS appears at first blush to be a representation of that. There’s no proprietary client software needed. It just works as a NAS but in a really new way, to support these crazy GPU workloads in AI.”
So, why did Hammerspace wait so long to introduce it? “We are bringing it to market now because of everything we’ve learned, where we’ve now proven this architecture at hyperscale,” says Bassier.
The paradigm is fast evolving. HPC and AI/ML workloads are going to be pervasive across organizations, and they will need a new NAS architecture that combines the performance of HPC file systems, the right feature set, and the standards-based simplicity of Network File System (NFS).
Tying Together the Best of Both Solutions
In a scale-out NAS architecture, data has to make multiple network hops between the client and server. The more the hops, the higher the latency of transmission. The Hyperscale NAS architecture opens a direct data path between the two points, reducing the number of transmissions and retransmissions. The result is lower latency and faster throughput.
Metadata is handled out-of-band. “We offload a lot of the metadata operations to a separate path so we can streamline it.”
Hyperscale NAS detaches data from metadata, putting them into two separate planes – the data plane and the control plane. The metadata resides inside the metadata service nodes which are essentially queryable databases.
This ties into another key aspect of the Hyperscale NAS architecture that Bassier highlights. Oftentimes, file systems are trapped in the storage layer, which makes data opaque to users. This is a barrier to collaboration.
Hammerspace lifts the file system out of the storage layer and creates a global parallel file system with a single global namespace. Datasets are assimilated from multiple sources across sites and storage silos, and deposited into this file system. With global data orchestration, transparency is ensured for all users.
“Even users that are remote or not co-located with the data are all presented the same files that they’re authorized to see.”
Hyperscale NAS leverages the NFS v4.2 client, particularly two of its optional capabilities – parallel NFS and FlexFiles. “Hammerspace is the first one to take advantage of those capabilities,” says Bassier.
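Because the client side is the stock Linux NFS implementation, attaching to such a system looks like an ordinary NFS v4.2 mount rather than installing a proprietary agent. The sketch below is a minimal illustration of that idea; the server name, export path, and mount point are hypothetical, and real option choices (nconnect, rsize/wsize, and so on) would follow the vendor’s deployment guidance.

```python
# Minimal sketch: mounting a pNFS/FlexFiles-capable server from a stock Linux
# NFS v4.2 client. All names below are hypothetical placeholders.
import subprocess

server = "hs-anvil.example.com"      # hypothetical metadata server address
export = "/global"                   # hypothetical export path
mountpoint = "/mnt/hammerspace"      # hypothetical local mount point

# A plain NFS mount with vers=4.2; no proprietary client software is involved.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4.2", f"{server}:{export}", mountpoint],
    check=True,
)
# With pNFS and FlexFiles layouts, metadata stays on the out-of-band control
# path while reads and writes go straight to the data nodes.
```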
If Hyperscale NAS sounds a lot like an HPC parallel file system to you, it is worth noting that there are significant differences. Where other solutions rely on proprietary file system clients or agents that sit on GPU servers to give them the intelligence, Hammerspace doesn’t, and works with all standards-based clients, he concludes.
No One Wants To Be A Network Engineer Any More
Mar 12, 2024
The job market is more competitive than ever, but the desire to fill network engineering roles is lower than before. In this episode, Tom Hollingsworth is joined by Ryan Lambert, Dakota Snow, and David Varnum for an examination of why network design and implementation isn’t a hot career path. They look at the rise of cloud as a discipline as well as the reduction of complexity in modern roles with help from software and automation shifts. They also discuss how entry level professionals can adjust their thinking to take advantage of open roles on the market.
Real World AI Looks a Lot Different From the Movies
Mar 05, 2024
Most people envision AI as a cool and orderly datacenter activity, but this technology will soon be everywhere. This episode of the On-Premise IT podcast contrasts the AI-based greenhouses of Nature Fresh Farms, as presented by guest Keith Bradley at AI Field Day, with the massive GPU-bound infrastructure many people imagine. Allyson Klein, Frederic Van Haren, and Stephen Foskett attended AI Field Day and were intrigued by the ways AI can process data from cameras and other sensors in a greenhouse environment.
The development of AI networking is moving forward and Ethernet is taking a prime role in how workloads will communicate. In this episode, Tom Hollingsworth is joined by Drew Conry-Murray and Jordan Martin as well as J Metz, the chair of the Ultra Ethernet Consortium, to discuss the progress being made by the UEC to develop Ethernet to meet the needs of AI. They discuss the roadmap for adoption of technologies as well as the drivers for the additions to the protocol and how people can get involved.
Generative AI is Developing Applications
Feb 20, 2024
Generative AI is becoming a key tool for software developers, and businesses are embracing it as well. This episode of the On-Premise IT podcast brings Paul Nashawaty of The Futurum Group, data expert Karen Lopez, and Stephen Foskett together to discuss how AI is impacting application development. Generative AI is incredibly compelling, rapidly producing credible output, and it’s hard to put a stop to it. Rather than trying to stand in the way, companies are looking for better quality tools, with data privacy and compliance capabilities to fend off the negatives that can arise from AI-generated content. AI can also help with tasks like documentation and testing that are less popular and more problematic, and these can improve overall code quality as well.
Modern workloads are overloading hardware systems, and the CPUs in the market today aren’t up to the task. In this episode of On-Premise IT Podcast recorded on the premises of the Cloud Field Day event in California, host Stephen Foskett is joined by Thomas LaRock, Shala Warner, and Jim Czuprynski from the IT world, to talk about innovation in hardware. The discussion addresses the burning question of whether investing in more specialized hardware will solve the problem. Hear the panel explain how hardware innovation is intertwined with software innovation, and how the two components come together to power cutting-edge workloads.
The IT world is obsessed with AI but the desire to put AI into every product creates confusion and uncertainty. In this episode of the On-Premise Podcast, Tom Hollingsworth is joined by Zoë Rose and Dominik Pickhardt to discuss why everyone is so excited about AI. They also focus on issues with opaque algorithms and how AI can actually be useful in helping professionals with their daily work.
Platform Engineering Isn’t Just DevOps Renamed
Jan 30, 2024
Platform engineering has been happening for a long time, but today’s implication is quite different. This episode of the On-Premise IT podcast brings platform engineering expert Michael Levan, industry analyst Steven Dickens, and host Stephen Foskett together to consider what platform engineering is today. Building a platform for self service in the cloud has more in common with product development than the platforms delivered historically by IT infrastructure teams. One of the drivers for the DevOps trend was the divergence of IT development and operations over the last few decades, but this was different in the mainframe world. In many ways, today’s platform engineering teams are more mature process-wise thanks to the demands of multi-tenant cloud applications.
The term “platform engineering” has exploded in IT. Explainers and articles about its boundless implications abound. Some define it as a niche battle, others call it the DevOps killer, and some project it as a million-dollar career. Whatever it is, findings show that it is at the peak of the hype cycle and settling into a new standard.
The truth lies somewhere in the middle. The proclivity to slap new labels on old things is not new in marketing. The hype about platform engineering is somewhat the same. “We’ve been doing platform engineering for a really long time. It just has a name and a focus point now, but it’s not something that just popped out of nowhere,” says Levan.
Dickens likens it to the role of Mainframe developers. “The Mainframe guys speak in different tongues and worship different gods than the distributed and cloud guys, but if you took away the nomenclatures and actually looked at the job, it would be the same functional work.”
What’s the Hype about?
So why is it being loved to death now? Because platform engineering does what software delivery processes benefit from most. It drives standardization and automation.
In a way, platform engineering is like the Hibachi experience. At a traditional Hibachi-style Japanese place, diners select their choice of noodles, meat, broth, sauce and toppings from the counter. At the bar, the chefs wield their knives, chopping, grilling, and cooking the ingredients into a hearty bowl of goodness.
Platform engineers do the same thing for the development environment. Platform engineering is the methodology to bring disparate components together into a platform in a way that makes sense, ultimately elevating the developer’s experience. In doing so, it alleviates the challenge of having to constantly worry about the platform.
The modern stack that engineers interface with can be broadly divided into three categories – the platform, the capabilities and the UI. The approach abstracts away complexity at all three levels, making sure that platform users can access the self-service features more easily. Sounds a bit like DevOps, right?
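One way to picture that abstraction is a thin self-service layer: developers state only what they care about, and the platform fills in the standardized details. The sketch below is purely illustrative; the function, defaults, and field names are hypothetical and not drawn from any specific product.

```python
# Minimal sketch of a self-service abstraction: developers supply only what they
# care about, and the platform layer fills in standardized defaults. All names
# and values here are hypothetical.
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    name: str
    image: str
    env: str = "dev"

PLATFORM_DEFAULTS = {
    "dev":  {"replicas": 1, "cpu": "250m", "memory": "256Mi"},
    "prod": {"replicas": 3, "cpu": "1",    "memory": "1Gi"},
}

def render_deployment(req: ServiceRequest) -> dict:
    """Translate a developer-facing request into a platform-standard spec."""
    defaults = PLATFORM_DEFAULTS[req.env]
    return {
        "service": req.name,
        "image": req.image,
        "replicas": defaults["replicas"],
        "resources": {"cpu": defaults["cpu"], "memory": defaults["memory"]},
        "observability": {"logs": True, "metrics": True},  # standardized, not opt-in
    }

print(render_deployment(ServiceRequest(name="checkout", image="checkout:2.1")))
```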
Not a New Name for DevOps
Platform engineering in the cloud era is a community position, not a technical one, says Levan. It encourages the infrastructure team to step into the developers’ shoes for the first time, and see things their way. “Platform engineering has two primary goals – go into systems thinking about customer service, and have a product mindset. When you combine those two things, your job is literally to help people,” explains Levan.
This is where its likeness with DevOps can be seen. In the 2000s, companies did platform engineering the traditional way – the platform engineers tuned the platform, the developers built the applications. There was no real interaction or exchange between the two workgroups.
But as years passed and new technology approaches came about, thought leaders saw that there was merit in bringing the two departments closer together. In this new culture, platform engineers and developers are to function transparently to improve application delivery. They deduced that overlapping software development with not only infrastructure, but also operations and product management, would mature the processes, greatly contributing to organizational growth and success.
“Platform engineering is all about quality engineering. One of the big reasons why I became self-employed a couple of years ago was because I didn’t want to throw a duct tape in my environments anymore. I’m just really happy that the entire tech community is seeing the same thing now,” says Levan.
What is shaping the rising popularity of platform engineering is its maturity. At the core, today it is about creating order in chaos. Amid infinite workflows, tools and technologies, platform engineering fosters a consistent and standard environment that affords developers a predictable experience, and boosts productivity and efficiency, not only by freeing them to do their work, but by also eliminating errors and guesswork that frequently cause bottlenecks and delayed release cycles.
“Focusing on nonfunctional requirements, putting quality code into production and infrastructure mattering again is really key,” says Dickens.
Wrapping Up
As companies rethink their approach to software development, platform engineering shines a spotlight on ways CTOs can close gaps, build bridges between separate teams, solve bigger problems, and ultimately achieve shorter time to market.
For more, be sure to give the podcast – Platform Engineering Isn’t Just DevOps Renamed – a listen.
Cloud Repatriation is Really Happening
Jan 23, 2024
Now that businesses have deployed modern applications in the cloud they are starting to ask whether it might be more attractive to run these on-premises. This episode of the On-Premise IT podcast features Jason Benedicic, Camberley Bates, and Ian Sanderson discussing the pros and cons of cloud repatriation with Stephen Foskett. A recent blog post by 37 Signals got the Tech Field Day delegates talking about the reality of running modern applications in enterprise-owned clouds, whether in the datacenter or co-located. Certainly the hardware and software are available to move applications on-prem, and some workloads may be better served this way. Most of the necessary components to run modern web applications are available on-prem, from Kubernetes to Postgres to Kafka, but these can prove difficult to manage, which is one of the things as-a-service customers are paying for. Looking back to the debut of OpenStack, enterprises have wanted to run applications in-house but they found it too difficult to manage. OpenShift is much more attractive thanks to the support and integration of the platform, but many customers have financial and administrative reasons for as-a-service deployment. It might not be a mass exodus, but there are plenty of examples of repatriation of modern applications.
Why Companies Are Moving Off of the Public Cloud
A new trend coming out of the enterprise IT industry is cloud repatriation. The chatter picked up when 37signals, a SaaS project management company, publicly announced that it saved $1 million by pulling apps away from the public cloud. According to CTO David Heinemeier Hansson, repatriation has shrunk the company’s cloud spend by 60% and is projected to save an estimated $10 million over the next five years.
And theirs is not an isolated case. Skyrocketing costs of data and storage in the cloud have caused a lot of companies to pull away and migrate back to on-premises datacenters in the last few years. Seagate has built its own platform to deploy web applications that runs in its private datacenter on its own hardware. More recently, LinkedIn has called off plans to move workloads from on-site to Azure Cloud.
So are companies really abandoning their cloud computing dreams and hauling wares back to where they started? On a recent On-Premise IT Podcast episode, host Stephen Foskett addressed this question that has lately been the talk of Silicon Valley.
Public Cloud Offerings Come at a Premium
When considering relocating technology, the reasoning falls into two main buckets – cost and control. “As we went into 2024, a lot of very large enterprises are concerned about costs. So there is this ongoing effort for cost management, and what is happening is a recalculation or reevaluation of where the workloads are to be placed and why. That workload rationalization has been going on for some time,” notes Camberley Bates, VP Practice Lead at The Futurum Group.
Enterprises’ rationale behind migrating to the cloud was to reduce OpEx. The cloud offered an attractive answer to the surging cost problem in on-premises datacenters. The promise soured, however, as companies started to struggle with cost blowouts. Despite adapting their operating principles and practices to rein in spending, optimized cloud value has remained unrealized.
After expending a notably large amount of time and resources to get to cloud, when a company decides to withdraw, it reflects as poor planning. Much like in all financial decisions, the sunk-cost fallacy creeps in. And to keep the cloud obsession going, hyperscalers hook users in with free credits that give them a free pass to start down the road.
From Managed Cloud to a Private Infrastructure
Spurred by dependency fear and cost and ownership concerns, many big enterprises have started bringing selected applications on-site as part of their workload placement strategy.
“A few years ago, it was a cloud-first mentality which we’re moving away from today with the hybrid approach, but it’s a very interesting marketplace in terms of options of where you can repatriate to in terms of the software stack,” says Ian Sanderson, Product Manager.
One of the things that makes the argument of going back on-premises seem valid today is the evolution of datacenter computing. “Since the cloud came about, we’ve seen a lot of step change in on-premise compute. We have gone from average systems of 4 cores to up to systems with 64 cores. So you could pack a lot of compute into a small space at a small cost,” points out Jason Benedicic, independent consultant.
A growing technology ecosystem is making shifting applications possible. “There’s a lot more off-the-shelf products for running clouds. Kubernetes and containers have come a long way. So the skill ramp-up needed to build and run your own modern application stack is lessened – I don’t think it’s completely removed, it’s not as easy as virtualization is – but there’s a lower barrier to entry. There’s a cheaper, more dense hardware aspect and those come together to make repatriation a possibility,” he adds.
The Uncomfortable Truth
Although technological advances give users the freedom to place their workloads anywhere that offers maximum cost, performance, and security payoffs, lifting and shifting has its own trade-offs. The cloud has a monopoly on a few things that enterprises can’t pass up, especially with the widespread adoption of AI. For one, on-premises infrastructures can barely match the agility, speed, and iteration of the public cloud.
“If you run a startup business with a couple of DevOps engineers and a fairly small team, it is going to be a daunting proposition to run all of it yourself. It’s possible, but the question is, what are the hidden costs and where do they lie,” cautions Benedicic.
But, increasingly, data costs in the cloud are driving companies to rethink the strategy. “Talking about the issue of cost analysis, we’ve seen a decline in the cost of server instances. We have not seen that same kind of cost basis on the data side,” notes Bates.
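To see why the data side dominates that calculation, consider a deliberately simplified comparison. Every number below is hypothetical and exists only to show the shape of the arithmetic, not real pricing from any provider.

```python
# Purely illustrative cost comparison; every figure below is hypothetical and
# exists only to show the shape of the calculation, not real pricing.
monthly_compute_cloud = 40_000      # instance/container spend in the cloud
monthly_data_cloud = 55_000         # storage, egress, and data-service spend
monthly_onprem_amortized = 60_000   # hardware amortization, colo, power, staff

cloud_total = monthly_compute_cloud + monthly_data_cloud
savings = cloud_total - monthly_onprem_amortized
print(f"Cloud: ${cloud_total:,}/mo  On-prem: ${monthly_onprem_amortized:,}/mo  Delta: ${savings:,}/mo")
# If compute prices keep falling but data costs do not, the data line increasingly
# decides where a workload belongs, which is the point Bates makes above.
```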
An Emerging Ecosystem
Thankfully, modern containerized applications have some amount of portability built in. “With serverless stuff, there’s some level of interoperability but there are not a huge number of serverless platforms out there that are mainstream,” Benedicic says.
Companies like Red Hat and IBM have solutions that make quick work of installing on-prem environments. The rise of OpenShift has been game changing in the way people think about running private cloud. Red Hat OpenShift is an open-source container application platform. The on-premise PaaS flavor is self-managed and comes with on-prem support for maximum ease.
Red Hat is one of the companies that is building a full suite of tools that work together to make the transition easier. Things like deployment blueprints that serve as guides are extremely helpful to get users started.
Wrapping Up
Workload repatriation need not be a binary decision. In many big enterprises, cloud repatriation may have taken off, but it is not a quit-the-cloud movement that it has been made out to be. Amid an economic downturn, companies are trying to tighten their budget and deciding where a workload best resides is the cornerstone of that. A hybrid placement approach will ensure a more natural distribution of workloads across cloud and datacenters than we have seen before.
For more, be sure to check out the On-Premise IT Podcast episode – Cloud Repatriation Is Really Happening – to follow the discussion.
Ethernet Won’t Replace InfiniBand for AI Networking in 2024
Jan 16, 2024
InfiniBand is the king of AI networking today. Ethernet is making a big leap to take some of that market share, but it’s not going to dethrone the incumbent any time soon. In this episode, join Jody Lemoine, David Peñaloza, and Chris Grundemann along with Tom Hollingsworth as they debate the merits of using Ethernet in place of InfiniBand. They discuss the paradigm shift, the suitability of the protocols to the workloads, and how Ultra Ethernet is similar to another shift in converged protocols: Fibre Channel over Ethernet.
AI is going to accelerate development of malware everywhere from code to prompts for social engineering. But tools can be used for defense as well as offense. In this episode of the On-Premise IT Podcast, Tom Hollingsworth is joined by Girard Kavalines, Ziv Levy, and Matt Tyrer as they debate the impact that AI will have on malware development in 2024 and beyond. Hear how AI can drive automation on both sides of the security spectrum as well as how we can better prepare to face an onslaught of assisted attackers.
Your IT Security Policy Needs to Be Followed
Jan 02, 2024
IT security policies are aspirational goals because they have so many exceptions. The difference between being hacked and being safe could come down to one employee. In this episode, Tom Hollingsworth sits down with Jasper Bongertz and Brian Knudtson to talk about how security policies are inherently fragile and can cause people to have more faith in them than they should. Also discussed is how people are not always the problem in these situations and how companies can do a better job of crafting documents that reflect real-world applications of protection.
Users are always going to blame the connectivity medium for issues and we just have to accept it. In this episode, Sam Clements, Troy Martin, and Darrell DeRosia join Tom Hollingsworth to discuss why users are adamant that the wireless is the problem when it’s always something else. They discuss why IT professionals should focus less on blame shifting and more on creating an environment that provides resolution even if it’s not their problem. The episode wraps up with suggestions for professionals to create an environment better suited to meeting user expectations.
Automation is a very complicated subject that requires a lot of thought and planning before implementation. It’s not something that every organization needs to implement. In this episode, Tim Bertino, Jake Khuon, and Jordan Villarreal discuss the challenges inherent in automation of networks and systems. They also clarify the differences between scripting, orchestration, and real automation. In the end they give tips and questions to ask when you feel like it is time to start your journey toward automation.
WebAssembly Will Displace Containers For Web-Scale Applications
Dec 05, 2023
Containerization of applications is only a small step forward from virtualization, but WebAssembly promises a real revolution. This episode of the On-Premise IT podcast, recorded live at KubeCon 2023 in Chicago, features Nigel Poulton, Ned Bellavance, Justin Warren, and Stephen Foskett discussing the prospects for WebAssembly. WebAssembly (WASM) is lauded for its potential to be faster, smaller, and more secure than its predecessors. But skepticism surrounds its long-term adoption and development trajectory, with debates centering on whether WASM can achieve the transformative status that containers once held. While WASM applications are technically more portable, smaller, and quicker to start, adoption remains at an early stage, appealing more to developers than operations professionals.
Identity Management is Tweaking our Neuroses
Nov 28, 2023
The concept of identity management has become increasingly complex and challenging due to the purely digital nature of modern identity. This episode of the On-Premise IT podcast, recorded on-premises at ISS in Cleveland, features Bob Kalka of IBM, Leon Adato of Kentik, and Stephen Foskett discussing the various ways identity management tweaks our neuroses. As organizations grapple with this issue, they face the daunting task of merging elements such as identity, passkeys, passwords, and AI in a way that is seamless and less nerve-wracking.
Identity Management is Tweaking our Neuroses
The relevance of identity and self goes beyond what we normally comprehend in our routine lives. Within virtual spaces, identity management becomes even more important because it is disconnected from our physical experience. From the lens of the cybersecurity team, taking control of identity management presents unique challenges as they neither completely own the problem nor the solution. Legacy systems and constantly shifting tools create further hurdles in the management of identity and access permissions, often making it appear more like security theatre.
However, the necessity to ward off unauthorized access necessitates an efficient system of identity management. Many are suggesting that AI can break through these challenges. In particular, AI aids in the detection of identity-related threats by analyzing behavioral patterns. As a result, it helps to deal with the attendant neuroses related to identity management found in many digital users.
The concept of identity management is increasingly being viewed as a fabric of relationships rather than a singular goal. The identity fabric acknowledges the realities of a hybrid world where managing identity across multiple identity providers and directories is an essential function. Passwordless authentication, a user-friendly concept leveraging passkeys, has recently emerged as a popular solution. But the process of identity management does not halt with user access. The backend still necessitates managing identities in multiple locations. Responding to this need, the concept of identity orchestration has emerged as a novel approach in managing identity in varied environments. Taking ownership and addressing these challenges proactively is an essential step in effectively managing identity.
The complexity of digital identity management underlines the necessity for organizations to accept the reality of these management hurdles. Leveraging advancements such as AI to shed light on the issue in a more comprehensive manner is a critical stride towards finding more effective solutions. Recognizing and understanding the shortcomings of these systems is vital in this journey, just as it is crucial to appreciate the potential of AI in steering breakthroughs in the digital identity management space.
As ransomware continues to pose a significant threat to enterprises, C-level executives must collaborate and communicate with IT. This episode of the On-Premise IT podcast, brought to you by Commvault and recorded live in New York at their Shift event, features Thomas Bryant of Commvault along with Gina Rosenthal, Eric Wright, and Stephen Foskett. The discussion focused on the crucial need to bridge departmental gaps so IT and executive management can work together. The panel also emphasized the need for openness about risks, lessons from past attacks and the role of government mandates.
Cybersecurity is a C-Suite Problem
Ransomware is increasingly becoming a significant issue demanding the attention of C-level executives. As discussed on the On-Premise IT podcast recorded live at Commvault Shift, tackling cyber risks is not just a technical challenge but also one of leadership and governance. Commvault’s focus on cyber resilience brought to light the crucial role of CEO, CIO, and board involvement in addressing such threats. With government mandates pushing for greater accountability within organizations, the responsibility for cybersecurity now extends far beyond the domain of IT and security teams.
A key aspect highlighted throughout the discussion was the importance of collaboration between different departments within an organization such as security, networking, and IT operations. This cross-functional collaboration is seen as a critical factor in addressing cyber threats effectively. To facilitate such collaboration, the democratization of information flow is needed. Breaking down departmental silos and fostering free-flowing communication can aid in the rapid identification, reporting, and addressing of potential threats.
The shifting paradigm within cybersecurity, from solely focusing on prevention to minimizing damage in case of an attack, also came under discussion. The panel recognized that with the evolution and complexity of cyber threats, a strategy focusing only on prevention may be inadequate in protecting assets. This understanding calls for the regular testing and practicing of incident response plans, critical for building “muscle memory” and reducing potential downtime and lost revenue from cyber attacks.
In discussing the SolarWinds case, where the CISO faced charges from regulators after a cyberattack, the podcast panel underscored the importance of transparency and honesty in cybersecurity. This precedent sets an example for other organizations, emphasizing that surviving an attack should not be a source of shame, but instead should be an opportunity to learn and enhance security measures. The critical role of government regulations, resulting fines, and the potential incentivization of good cybersecurity practices through monetary means were also discussed as drivers of better cybersecurity practices.
Finally, the panel discussed Commvault’s new platform, marrying expert knowledge with proactive measures such as ransomware assessments to aid organizations in enhancing their cybersecurity practices. This approach signifies the industry’s movement away from its “wizards” culture, acknowledging vulnerabilities, and working towards admitting and addressing challenges. In essence, the cybersecurity landscape is one of shared responsibility, transparency, proactive measures, government backing, and technological advancements.
Changing or upgrading hardware and software is a scary proposition on the best of days. In this episode, join Tom Hollingsworth along with Keith Parsons, Mike Bolitho, and Lee Badman as they talk about moving from one vendor to another. There is a lot of planning that goes into the decision to upgrade or replace something. It’s even more frightening when you’re removing one vendor’s equipment for another. Learn what to look for and how to make the transition as easy as possible.
Lee Badman is a longtime freelance writer and analyst who has contributed to and written a number of wireless study guides. You can connect with Lee on LinkedIn or on X/Twitter and find his writings on his blog.
AI Won’t Fix Your Data Security Problems
Nov 07, 2023
Data security is a complicated subject and AI is not a magic solution to fix all of the problems you will face with it. In this episode, Tom Hollingsworth is joined by Richard Kenyan, Matt Tyrer, and Chris Hayner as they discuss how AI has changed the landscape of security. They discuss the challenges of finding the right AI models to replicate how your systems look and behave as well as where there are blind spots with respect to user behaviors. They all talk about the ways that attackers are starting to adjust their tactics to beat systems that can’t anticipate where the attacks will be coming from next.
Chris Hayner is a results-oriented IT professional with expertise in cloud, strategy, modernization, and IT security. You can connect with Chris on LinkedIn and on X/Twitter.
Edge is the Third Great Tech Revolution
Oct 31, 2023
Tech is a field full of revolutions, and Edge is something special. In this episode of the On-Premise IT Podcast, Jim Czuprynski, Gina Rosenthal, and Brian Knudtson give their take on whether Edge is a fundamental shift in the status quo or merely an evolution of other paradigm shifts like Cloud. The panel focuses on the way that Edge strategies are affecting the way we consume content and deploy applications as well as the impacts that Edge has on areas outside of technology.
Software licensing is making networking much more complex and causing networking professionals to be very confused about the state of their discipline. In this episode, Tom Hollingsworth is joined by Nick Buraglio and Lindsay Hill as they discuss the way that software defined networking (SDN) has changed the feature set of network hardware. Also discussed is the shift in focus to developer assets, how to recognize revenue from incremental feature additions, and the deployment of resources to appropriate functions in a network software development company.
Mind the Gap Between Hyperscale and Enterprise IT
Oct 17, 2023
Hyperscale-inspired technology is everywhere in enterprise IT, from Kubernetes to S3 to OCP, but these technologies may not be applicable. This episode of On-Premise IT features Cloud Field Day 18 delegates Allyson Klein, Eric Wright, and Nathan Bennett discussing the cloud gap with Stephen Foskett. Looking at AI, we see a very different deployment model in hyperscale cloud as opposed to enterprise cloud, with this gap in technology, implementation, and talent widening. One impact of the needs of hyperscalers is an increased focus on sustainability, specifically energy consumption. We should also consider how the hyperscale use case distorts the development of technology, which is obvious in CXL, GPUs, and networking technologies. Looking at Cloud Field Day, we see that many of these companies are attempting to bridge this gap, connecting hyperscale cloud technology to the enterprise. This is what makes the event so interesting!
Backing up data at the edge is fraught with challenges concerning the importance of the data and the limitations of the hardware at your disposal. In this episode of the On-Premise IT Podcast, Jody Lemoine, Ben Young, and Bart Heungens discuss how edge backup differs from traditional enterprise disaster recovery. They highlight the need to identify data retention requirements for edge systems as well as the pitfalls of using cloud solutions versus local options for disconnected devices or lackluster connectivity situations. The discussion wraps up with questions that operations teams should be asking to get in front of these challenges before disaster strikes.
Edge Innovation is Coming from All Directions
Oct 03, 2023
As we’ve discussed all season on Utilizing Edge, innovation is coming from all directions, including hardware, software, and applications. This special crossover episode of the On-Premise IT and Utilizing Tech podcasts features Edge Field Day delegates Brian Knudtson, Ned Bellavance, and Jody Lemoine discussing their perspectives about edge innovation with Stephen Foskett. The primary drivers at the edge are integration, efficiency, and connectivity, as well as the unique needs of the applications there. Starting with hardware, customers are headed in two directions, with more enterprise availability features deployed in some locations and less-capable hardware in others, both in terms of compute and networking. At the software level, most edge infrastructure is hyper-converged, meaning that multiple layers of the stack are integrated in software and managed as one. Although intended as an application platform, Kubernetes is being deployed as a packaging abstraction and distribution solution at the edge.
Although modern-day storage products let us do more with less and are more capable than ever before, they are also far more complex, and often unintelligible to the masses. Recorded at the recent Storage Field Day event, this On-Premise IT podcast features Stephen Foskett asking the attending luminaries from the storage industry to define storage in simple terms. With innovation piling high, storage has, in recent years, slipped away from the grasp of IT professionals, causing added stress and pressure. Storage, as a discipline, has grown so vast that it is only possible to be either a generalist with a broad understanding or a specialist with a narrow focus on one thing. Listen to the discussion to learn how storage professionals working at the heart of IT view and interpret storage.
Primary Storage is Becoming Secondary Storage
Sep 19, 2023
The storage industry is increasingly focused on memory rather than traditional storage, and this reflects an architectural shift in the compute stack. This episode of On-Premise IT focuses on the new storage stack, which now includes memory, with Andy Banta, Jim Jones, Vuong Pham, and Stephen Foskett, all of whom are attending Storage Field Day 26 and SNIA’s Storage Developer Conference. The difference between memory and storage was historically based on the technology at hand, but these lines are blurring. The latest systems can address storage and memory in very similar ways, and can apply advanced data management techniques to memory as well as storage. NVMe, NAND flash, CXL, and persistent memory technologies are blurring the lines, and the latest developments in software, as highlighted at SNIA’s SDC, bring new capabilities. As memory becomes more like storage, what was once primary storage has a new job to perform further down in the hierarchy focused on data management, ransomware, and data protection.
Memory is Edging Out Primary Storage
As reflected in the Storage Field Day presentations and discussions, as well as topics at this year’s SNIA Storage Developer Conference, primary storage is increasingly adopting the traits of secondary storage. This is driven by advancements in memory technology that are stepping into the role of primary storage, and by a growing focus on secondary services from primary storage vendors.
Prior to Storage Field Day, Andy Banta, Jim Jones, Vuong Pham, and Stephen Foskett discussed this new storage methodology. The discussion centered around persistent memory and CXL, memory systems, memory layering, and the emerging difficulties with memory tiering. We all agreed that primary storage looks more and more like secondary, while memory near the CPU is gaining traction as “primary memory!”
Of course memory isn’t storage, so we have to consider data persistence. Despite the need for speedier data access, we will also always need persistence of stored data, and this must be more than traditional archiving. We must also consider cache coherence, since these systems will have multiple cache levels beyond L1, L2, and L3.
CXL (Compute Express Link) provides a path forward, since it will deliver system expansion and cache coherence, even though apprehensions about coherency and processor contention (components of the forthcoming CXL 3.0) have surfaced. Despite these concerns, the protocol’s widespread adoption demonstrates its momentum and potential for radical IT infrastructure transformation.
The radical rearrangement of memory and storage, the mounting significance of tiered memory, and the incorporation of breakthrough tools like CXL signal a paradigm shift in understanding primary and secondary storage. Shaping the future of the IT stack, they are ushering in a new era that both challenges and excites the industry.
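To make the tiering idea more concrete, here is a minimal Python sketch of the kind of promotion and demotion policy the panel describes, with frequently read pages held in a fast near-memory tier and colder pages demoted to a persistent tier. The class, thresholds, and tier names are hypothetical illustrations, not any vendor’s implementation or a CXL API.

```python
# Illustrative only: a toy two-tier placement policy. "Near" memory (DRAM or
# CXL-attached) holds hot pages; colder pages live in a persistent far tier.
from collections import defaultdict

class TieredStore:
    def __init__(self, near_capacity, hot_threshold=3):
        self.near_capacity = near_capacity    # pages the fast tier can hold
        self.hot_threshold = hot_threshold    # reads before promotion
        self.near, self.far = {}, {}          # page_id -> data
        self.reads = defaultdict(int)

    def write(self, page_id, data):
        # New data lands in the persistent tier; promotion is read-driven.
        self.far[page_id] = data

    def read(self, page_id):
        self.reads[page_id] += 1
        if page_id in self.near:
            return self.near[page_id]
        data = self.far[page_id]
        if self.reads[page_id] >= self.hot_threshold:
            self._promote(page_id)
        return data

    def _promote(self, page_id):
        if len(self.near) >= self.near_capacity:
            # Demote the least-read resident page back to the persistent tier.
            victim = min(self.near, key=lambda p: self.reads[p])
            self.far[victim] = self.near.pop(victim)
        self.near[page_id] = self.far.pop(page_id)
```

In a real system the demotion path also has to preserve persistence and cache coherence across multiple levels, which is exactly where CXL’s coherency features come into play.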
AI Infrastructure Disrupts Enterprise IT with Justin Emerson from Pure Storage
Sep 12, 2023
As enterprises try to deploy infrastructure to support AI applications, they generally discover that the demands of these applications can disrupt their architecture plans. This episode of On-Premise IT, sponsored by Pure Storage, discusses the disruptive impact of AI on the enterprise with Justin Emerson, Allyson Klein, Keith Townsend, and Stephen Foskett. Heavy-duty AI processing requires specialized hardware that more closely resembles High-Performance Computing (HPC) than conventional enterprise IT architecture. But as more enterprise applications leverage accelerators like GPUs and DPUs, and become more disaggregated, AI starts to make more sense. Power is one key consideration, since companies are more aware of sustainability and are impacted by limited power availability in the datacenter, and efficient external storage can be a real benefit here. This is still general-purpose infrastructure, but it increasingly incorporates accelerators to improve power efficiency. One issue for general-purpose infrastructure is the concern over security, since enterprise AI applications will certainly benefit from broad access to a variety of enterprise data. Enterprise use of AI will require a new data infrastructure that supports the demands of AI applications but also enables data sharing and integration with those applications.
Once Again, Snapshots Are Not Backups
Sep 05, 2023
Snapshots are still not backups, but the nature of data storage means care must be taken to determine data retention requirements. In this episode of the On-Premise IT Podcast, Brian Knudtson, Matt Tyrer, and Richard Kenyan discuss the nature of data protection and recovery point objectives. The rapid pace of cloud storage growth, along with the shift toward microservices and serverless computing, has added challenges to defining which data needs to be backed up. In addition, terminology changes among major companies in the space have created confusion about features.
Firewalls Need to Evolve with Fortinet
Aug 29, 2023
Security is an ever-changing technology that requires constant vigilance. Traditional models face challenges in a modern world where bad actors have better tools and methods for avoiding detection. If security teams want to stay on top of threats today, they need to be sure they’re using the latest solutions to address these challenges.
In this episode, sponsored by Fortinet, Chris Grundemann, Michael Levan, and special guest Nirav Shah discuss the need for updated solutions and how Fortinet is changing the game with their Hybrid Mesh Firewall platform.
VMware Should Focus on the Hypervisor and Networking
Aug 22, 2023
As we head into VMware Explore US 2023, we are forced to consider the company’s strategy once again. Wouldn’t it be better if VMware focused on the hypervisor and networking rather than continually exploring new products and markets? That’s the question posed by Stephen Foskett to Allyson Klein, Andy Banta, and Matt Tyrer in this episode of the On-Premise IT podcast. Focus isn’t a bad strategy, especially given the slow pace of development for cloud-native applications in the enterprise. And VMware’s involvement in edge computing is an enticing new market for their core technologies. But not everyone is convinced that this is the right move!
In this episode of the On-Premise IT podcast, Stephen Foskett poses the question of focus and strategy for VMware. Allyson Klein, Andy Banta, and Matt Tyrer acknowledged the significance of VMware in the enterprise and the value of their core products, with Banta pointing out the value of moving a running Virtual Machine (VM) from one place to another, memory over-provisioning, and the ability to use multi-core processors effectively. The discussion emphasizes VMware’s relevance for enterprises preferring to run their applications in a modern data center.
However, the panel expressed skepticism about VMware’s decision to diversify its focus. VMware initially gained traction by encouraging collaboration among networking, storage, and processor vendors, providing an integral platform that unified these varied technologies. But its decision to compete with these firms by branching out beyond its core competencies has raised concerns among panelists.
The panelists’ apprehension was also influenced by emerging technology trends, particularly the rise of edge development and container use. The panelists saw VMware as playing a potentially critical role in edge environments, given its established ties with enterprise IT. They anticipated VMware’s strategies for hypervisor control in multi-cloud to multi-edge settings responding to interoperability challenges.
The upcoming VMware Explore event in Las Vegas is highly anticipated by the panelists, as they look forward to insights about the company’s vision and imminent innovations. Especially significant is the fact that this event comes as the company nears takeover by Broadcom. VMware’s steps in leveraging enterprise data center applications beyond data center boundaries, suggesting its involvement in edge compute and storage, will be closely watched.
Low Code and No Code Aren’t The Magic Solution
Aug 15, 2023
Low Code and No Code automation solutions have been gaining significant popularity. Organizations are embracing them to kickstart or continue automation projects. But are they the right fit for every company? There are considerations to discuss and sizing issues that need to be addressed. You also need to understand the potential impact of having a team dedicated to a solution and not a methodology.
In this episode, Carl Fugate, John Osmon, and Girard Kavelines discuss where Low Code and No Code make sense and where you should consider using a different tool.
Startups are Tech Trailblazers for the Giants
Aug 08, 2023
As tech giants struggle to adapt to changing business conditions, startups are quick to blaze new trails. This episode of On-Premise IT, hosted by Stephen Foskett and featuring Rose Ross, Tim Crawford, and Justin Warren, compares the tech world to an environment where small, innovative startups are advancing while larger, traditional companies fight to keep their market dominance. Established companies can only flourish if they are able to understand the evolving needs of the industry, while startups can close the gap in markets the incumbents overlook. When it comes to IT products, CIOs are no longer the primary customer, necessitating a better understanding of who the target audience is and how to communicate effectively with them.
In the fast-paced world of information technology, where the landscape is constantly evolving, small startups have emerged as the tech trailblazers, driving progress and revolutionizing the industry. The significance of these small ventures making their way in the tech world is the focus of The On-Premise IT Podcast hosted by Stephen Foskett. Joined by Rose Ross, Justin Warren, and Tim Crawford, the panel delves into the challenges faced by established companies in adapting to change and the opportunities that arise for startups to innovate.
Large, traditional companies often struggle to adapt to the rapid evolution of the tech industry, leaving them ill-prepared to meet the changing needs of consumers. On the other hand, startups thrive by identifying gaps in the market and overlooked opportunities that incumbents might have missed. This ability to innovate and offer solutions tailored to emerging customer expectations becomes the driving force behind the success of small startups.
One critical aspect that differentiates startups from established companies is their approach to understanding the customer. While larger companies may struggle to decipher the true needs of their customers, startups have the advantage of being closer to their target audience, enabling them to create products that directly address specific pain points. However, the panel notes that as startups innovate, their target audience may shift, necessitating a refined understanding of who to target and how to communicate effectively.
Startups tend to be trailblazers, envisioning and creating solutions for future problems rather than focusing solely on current issues. This forward-looking approach allows them to carve out unique spaces in the market. However, it is crucial for startups to ensure that the solutions they develop genuinely add value and fulfill the customer’s needs. Startups must avoid the trap of solving problems that may not require immediate solutions or deliver tangible benefits to their customers.
The shift towards software-dominated solutions and the leverage of open-source and cloud components have significantly reduced entry barriers for startups. Software development has become more accessible compared to hardware development, allowing startups to explore innovative ideas without significant upfront costs. Additionally, the industry’s focus on interoperability and flexibility opens doors for new companies to challenge established processes and bring fresh perspectives to existing solutions.
While startups have an advantage in being customer-centric, the panel raises concerns about the current investment model’s impact on technology innovation. Venture capitalists may prioritize investor interests over customer needs, potentially hindering genuine innovation. The Tech Trailblazers Awards recognize companies that innovate and bring value to customers, irrespective of VC funding, encouraging startups to focus on solving real-world problems.
The panel speculates on the possibility of increased collaboration among startups, where smaller ventures unite to create something more substantial collectively. This shift could redefine the perception that joint ventures are exclusively for large companies, creating new avenues for small startups to scale their impact.
Small startups have become the driving force behind the transformation and evolution of the technology industry. Their ability to innovate, understand their customers, and adapt to changing market trends sets them apart from established incumbents. As the industry continues to evolve, startups have the unique opportunity to carve out their niche and lead the way for established companies to follow suit. The podcast sheds light on the importance of nurturing the startup ecosystem, reevaluating investment models, and embracing collaborative efforts to pave the way for a thriving tech industry driven by innovation and customer-centricity.
One of the most attractive use cases for VMware was disaster recovery, and the availability of cloud infrastructure enhances this use case. That’s the topic of this episode of the On-Premise IT podcast, sponsored by Pure Storage. Cody Hosterman, Calvin Hendryx-Parker, Ned Bellavance, and Stephen Foskett discuss the use case for disaster recovery in the cloud. Although each cloud platform has different virtual machine capabilities, most now have a storage platform like Pure Storage Cloud Block Store, which is available in AWS and Azure. In addition to the pay-as-you-go aspect, the consistency of operations when using a compatible storage solution between the datacenter and the cloud is extremely compelling. Once data is in the cloud it opens the door to cloud-native applications and modernizing the enterprise as a whole.
Network-as-a-Service is the End of Network Engineering Roles
Jul 25, 2023
Network-as-a-Service is a new concept. Or is it? The ideas behind having someone else work on your network infrastructure are as old as the industry. With public cloud becoming the dominant form of IT consumption, the industry is taking a second look at NaaS. In this podcast episode, Drew Conry-Murray, Mohammad Ali, and Pat Allen discuss the definition of Network-as-a-Service, the various adoption models NaaS can take, and how enterprises small and large can find benefits.
The importance of testing cannot be overstated. Network testing is more than just verifying bandwidth or certifying network components. Applications play an important role in determining how networks should operate, as does the way components integrate with each other at the unit and system level.
The sponsor of this episode, Keysight, brings a wealth of knowledge to the discussion, born of years of experience in the network testing space. Learn how Keysight applies that knowledge and experience to the modern world of high-speed Ethernet and the needs of companies deploying it at scale.
Modern information security teams need visibility to ensure user safety. Traffic flows and patterns are analyzed for anomalies, and policies are put in place to ensure everyone is secure. However, protecting the data that you’ve collected is an even bigger task. Organizations need to ensure users have their identities and patterns hidden from those who might use them for nefarious purposes. In this episode of the On-Premise IT Podcast, Roy Chua, Karen Lopez, and Alex Neihaus join Tom Hollingsworth to debate the need for organizations to secure their enterprise while also keeping user data private, and why the gap between the two aims is easy to lose sight of when making business policy.
In this age of software, cloud, and platforms, custom hardware seems to be a lost art. In this episode of the On-Premise IT podcast, Joep Piscaer, Max Mortillaro, Steve McDowell, and Stephen Foskett consider whether hardware really matters anymore. Given that developers generally prioritize higher-level abstractions and platforms, they often ignore the importance of hardware. But we can argue that optimizing hardware is crucial for efficiency, sustainability, and cost-effectiveness. The importance of hardware depends on the specific use case or context, and developers should be educated about the impact of their hardware choices. The possibility of regulations being introduced to enforce efficiency and optimize hardware usage should also be considered.
The panelists debated whether hardware is truly relevant in today’s IT landscape. It was observed that developers often focus on higher-level abstractions and platforms, caring less about the underlying hardware. However, some panelists emphasized the importance of hardware optimization, highlighting its impact on efficiency, sustainability, and cost-effectiveness. The role of hardware was seen as varying depending on the specific use case or context. While developers may not prioritize hardware, it was argued that they should be educated about its significance to make more informed decisions and consider aspects like energy efficiency.
Looking towards the future, the panelists speculated that hardware optimization could become increasingly important, especially for product vendors operating in an abstracted software-defined world. As software becomes more efficient and abstracted from hardware, there is an opportunity for vendors to differentiate themselves by offering optimized hardware solutions. They also discussed the possibility of regulations being implemented to enforce efficiency and encourage hardware optimization. This could be accomplished through measures such as introducing taxes or incentives based on hardware usage.
In conclusion, the panelists agreed that while developers may not prioritize hardware, its relevance is still important for efficiency, sustainability, and cost-effectiveness. The need for hardware optimization may vary depending on the use case or context. However, there was consensus on the importance of educating developers about the impact of their hardware choices. The conversation also touched on the possibility of regulations being introduced to enforce efficiency and optimize hardware usage. For further discussion on this topic, the panelists shared their contact information, making it easier for listeners to engage with them.
Silos Are Sabotaging Your Security Strategy
Jun 27, 2023
IT is full of silos. They help ensure that experts are working on the areas they are best suited for. However, silos are a problem for security teams. When you need information and visibility, the walls insulating your other teams become a barrier. How can we address this in the security space? And what does the CIO need to know to make everyone more effective? In this episode, join Alex Neihaus, Karen Lopez, and Bruno Wollmann as we explore the impact that silos have on our security strategy.
This episode of the On-Premise IT Podcast focuses on the challenges posed by the siloed nature of enterprise IT departments, which often hinder effective security practices. This fragmentation within organizations makes it particularly difficult to implement cohesive security measures that cover all aspects of an enterprise’s infrastructure and systems.
While enterprise IT departments are often divided into separate teams, attackers do not limit their efforts to specific silos. They exploit vulnerabilities across the entire system, necessitating holistic security measures. Recognizing this, organizations must strive to break down silos and develop cross-silo solutions to effectively protect against cyber threats.
In the realm of data security, internal threats are just as significant as external ones. Malicious actors within an organization can cause significant harm to data integrity and confidentiality. Hence, it is crucial to address internal security risks alongside external threats. This requires collaboration and cooperation between different teams, which can be challenging due to conflicting priorities and differing perspectives.
The implementation of cross-silo security solutions can sometimes lead to disagreements between teams. IT and security teams may have different approaches, preferences, or priorities, causing friction and delays in the decision-making process. However, when security and IT teams share common goals, trust can be built, leading to increased collaboration and more effective security strategies.
While security policies are essential for safeguarding organizations, they can be poorly implemented in technology, resulting in tension between IT and security teams. In some cases, security measures can impede the smooth operation of systems or restrict the flexibility required by IT teams. Striking a balance between robust security and operational efficiency is crucial for ensuring the overall success of an organization’s security efforts.
To achieve optimal security, it is necessary to maintain awareness of security issues. However, information overload can sometimes lead to a lack of understanding of the underlying technology. It is important to strike a balance between staying informed about security threats and vulnerabilities while ensuring that IT professionals possess a deep understanding of the technologies they work with. This helps bridge the gap between security and IT teams and facilitates effective collaboration in implementing security measures.
To address the challenges posed by siloed IT departments and enhance security, organizations should consider adopting a more balanced approach. This entails breaking down silos through education, support, and increased visibility into business needs. Additionally, job descriptions within IT departments should evolve to reflect the importance of cross-functional expertise, encouraging the cultivation of generalists who possess knowledge in networking, database management, and application development. By fostering collaboration and eliminating silos, organizations can achieve a more robust and comprehensive security posture that aligns with business objectives.
Constant Rebranding is Ruining Your Sales Cycle
Jun 20, 2023
You may be intimately familiar with some brands and product lines, but not everything is iconic. Companies rebrand products all the time to get rid of poor performers or to try to increase sales. In the world of enterprise IT, does that make it more difficult to figure out what you’ve bought? Or how best to support it? In this episode of the On-Premise IT Podcast, Zoe Rose, Eric Steward, and Josh Warcop discuss the challenges of constant rebranding and how your efforts to put on a fresh face may lead to hard feelings.
In this episode of the On-Premise IT Podcast, the panel discusses the negative impact of constant rebranding in the IT industry. They highlight how the confusion caused by changing product names and lack of documentation creates challenges for users and customers. The panelists share their experiences with Cisco’s security products and the difficulty of identifying and understanding the specific functionalities of each product. They also discuss the frustration of dealing with outdated domains and the need to constantly relearn and adapt to new product names.
The conversation then shifts to the importance of market education and how the constant rebranding hinders the sales cycle. The panelists express concerns about not having enough time to learn about the products they are working with or supporting, which affects their ability to provide effective sales solutions. They emphasize the need for clear documentation and product comparisons to help users make informed purchasing decisions.
The panelists acknowledge the challenges of maintaining consistent branding and the potential confusion it can create. They mention examples like Cisco Catalyst switches, which can refer to access points or routers, and Apple’s macOS versions, which have undergone multiple name changes. While consistency can be beneficial, it can also lead to problems when expectations are not met. They discuss the balance between leveraging existing brand reputation and avoiding negative associations or becoming synonymous with subpar products.
The panel also explores the concept of genericized trademarks, where a brand name becomes synonymous with a specific product category. They highlight Palo Alto’s success in becoming synonymous with application-level firewalls and how this can both benefit and harm a company’s reputation. Finally, they discuss the challenge of leveraging acquired brands like Meraki to reshape perception and whether it’s a positive move for a company like Cisco. They debate the trade-offs between simplicity and the loss of complexity, emphasizing the importance of understanding the target market’s needs and skill levels.
Overall, the panel agrees that constant rebranding can have a detrimental effect on sales cycles and customer perception. They stress the significance of clear documentation, market education, and finding a balance between consistency and adaptability in branding strategies.
Machine Learning is Best Suited for Security
Jun 13, 2023
Although artificial intelligence, specifically machine learning and large language models, is in the news, it isn’t very useful in enterprise IT. In this episode of the On-Premise IT podcast, Karen Lopez, W. Curtis Preston, Michael Levan, and Stephen Foskett discuss the use case for AI in security. The panel acknowledges that machine learning can be beneficial in identifying anomalies and patterns that humans may overlook. It can assist in generating policies, templates, and rule sets, as well as providing best practices based on aggregated data. However, they also express concerns about the responsible use of AI and the need for training models on specific environments to ensure effectiveness. They highlight the importance of having the right data sets and the challenges of dealing with the black box nature of machine learning. Despite potential exploits and limitations, they agree that AI is currently the best tool available for detecting and addressing security threats, such as data exfiltration and unauthorized access.
In this episode of the On-Premise IT podcast, the discussion focuses on the use of AI in enterprise security. The panelists emphasize the potential benefits of AI, particularly machine learning and large language models, in identifying anomalies and patterns that might go unnoticed by human analysts. By leveraging AI, organizations can generate policies, templates, and rule sets that enhance security measures. Furthermore, AI can provide valuable insights and best practices based on aggregated data, assisting security teams in making informed decisions and strengthening their defenses.
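As a minimal sketch of the kind of anomaly detection described here, assuming hypothetical per-session features (data volume, hour of day, failed logins) rather than any vendor’s model, an isolation forest can flag sessions that deviate from the learned baseline:

```python
# A minimal, illustrative sketch using scikit-learn's IsolationForest on
# synthetic session features; not a production detector or a vendor product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" sessions: [MB transferred, hour of day, failed logins]
normal = np.column_stack([
    rng.normal(50, 10, 1000),      # typical data volumes
    rng.normal(13, 3, 1000),       # business-hours activity
    rng.poisson(0.2, 1000),        # occasional failed logins
])
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(normal)

# A suspicious session: a huge transfer at 3 a.m. with many failed logins,
# the kind of exfiltration pattern the panel mentions.
suspect = np.array([[900.0, 3.0, 7.0]])
print(model.predict(suspect))      # -1 flags the session as anomalous
```

The "black box" concern raised later in the discussion applies even to a simple model like this one: the prediction is easy to obtain, but explaining why a session was flagged takes additional work.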
Despite these advantages, the panel also raises concerns about the responsible use of AI in security. They emphasize the necessity of training models on specific environments and datasets to ensure the accuracy and effectiveness of AI systems. Without proper training, AI algorithms might produce false positives or negatives, leading to inadequate security measures or unnecessary alarm. This highlights the importance of utilizing relevant and high-quality data sets to achieve optimal results.
Another challenge discussed in the podcast is the “black box” nature of machine learning models. While AI algorithms can detect and flag suspicious activities, it can be challenging for human operators to comprehend and interpret the reasoning behind those decisions. The lack of transparency poses difficulties in understanding the rationale of AI systems, potentially impeding the ability to trust and effectively utilize them for security purposes.
In spite of these challenges, the panel unanimously agrees that AI, at present, is the most powerful tool available for detecting and addressing security threats. It can effectively identify data exfiltration attempts, unauthorized access, and other malicious activities. The panel members emphasize the importance of continuously refining and enhancing AI models to adapt to evolving threats and changing attack techniques.
Overall, the discussion offers a balanced view of the use of AI in security. While acknowledging the potential advantages of AI in augmenting human capabilities, the panel highlights the need for responsible implementation, proper training, and ongoing refinement of AI systems. By leveraging the power of AI and combining it with human expertise, organizations can bolster their security defenses and effectively combat sophisticated threats.
Can a remote predictive wireless survey achieve the same results as something on-site? Can the current generation of modeling give you the assurances that your design is going to work? And does the coming future of AI-driven development offer any additional capabilities that we don’t see today? In this episode of the On-Premise IT Podcast, Kerry Kulp, Peter Mackenzie, and Mohammad Ali discuss the trends of predictive surveys and how we can improve overall design satisfaction.
Cloud Workload Repatriation is a Real Problem
May 30, 2023
Enterprise IT is constantly oscillating between centralized and distributed, and we’re currently in a period of repatriation of workloads from the cloud. This episode of the On-Premise IT podcast features three delegates from Cloud Field Day 17, Joey D’Antoni, Eric Wright, and Jason Benedicic, discussing the reality of repatriation of cloud applications with Stephen Foskett of Gestalt IT. Pundits constantly tout the money saved by repatriating from the public cloud, but this might not be the best choice, especially for smaller organizations. The only way to ensure functionality between on-prem, hybrid, and public cloud is to use them all, and to use each where it is the best solution. Repatriation is especially challenging for today’s SaaS-oriented businesses, since most of these solutions can’t be run on-prem. But even workloads that can be run outside the cloud will likely require re-architecting to run locally. Yet many companies are developing software to ease the transition to and from the cloud, and these make it much easier to repatriate.
Finding the Cloud’s Sweet Spot: Navigating Workload Challenges and Unleashing the Power of Hybrid Solutions
Enterprise technology has witnessed a pendulum swing between centralized and distributed models, with the cloud representing a distributed approach. However, challenges persist in determining the suitability of cloud for different workloads, considering factors like performance and cost profiles. This has prompted a reevaluation of the cloud’s effectiveness for certain applications.
Early cloud adopters often faced the realization that their expectations of cost savings and architectural understanding were not always met. This led to a shift back to on-premises environments. However, technological advancements have reignited the assessment of the cloud’s true cost-effectiveness, highlighting the ongoing challenge of predictability that software solutions have yet to fully address.
The initial allure of cloud services was driven by the expertise and offerings of major providers like AWS and Microsoft, which catered to specific purposes and modern workloads that organizations couldn’t handle internally. Over time, hardware advancements and distributed skill sets made running workloads in the cloud more manageable. However, challenges arise when evaluating the benefits of on-premises solutions, particularly for technology-focused organizations with substantial investments and engineering staff.
The polarized narrative of being exclusively “all-in” or “all-out” of the cloud fails to recognize the value of hybrid models and the nuanced decision-making required for different workloads and architectures. It is crucial to move beyond trends and focus on evaluating applications and workloads based on their specific needs, using sensible and proven approaches.
The cloud offers flexibility, burst capacity, and programmable workload deployment, making it suitable for certain tasks. Instead of abandoning the cloud solely based on cost considerations, organizations can adopt a hybrid approach, strategically choosing which workloads are best suited for on-premises environments and which benefit from the cloud’s capabilities.
To optimize cloud adoption, organizations must continuously evaluate their strategies, leveraging tools and evaluating cost models to determine which workloads may be better suited for on-premises environments. Challenges related to storage performance, costs, and capturing accurate metrics must be addressed to ensure cost-efficiency and predictable outcomes.
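As a purely illustrative example of such a cost model, the sketch below compares a monthly cloud bill with an amortized on-premises cost. Every rate and figure here is hypothetical and would need to be replaced with real quotes, utilization data, and staffing assumptions before informing a repatriation decision.

```python
# Hypothetical cost comparison; all numbers are invented for illustration.
def monthly_cloud_cost(instance_rate_hr, hours, storage_gb, storage_rate_gb,
                       egress_gb, egress_rate_gb):
    return (instance_rate_hr * hours
            + storage_gb * storage_rate_gb
            + egress_gb * egress_rate_gb)

def monthly_onprem_cost(capex, amortization_months, power_cooling, admin_share):
    return capex / amortization_months + power_cooling + admin_share

cloud = monthly_cloud_cost(instance_rate_hr=0.40, hours=730,
                           storage_gb=2000, storage_rate_gb=0.08,
                           egress_gb=500, egress_rate_gb=0.09)
onprem = monthly_onprem_cost(capex=25000, amortization_months=36,
                             power_cooling=120, admin_share=400)
print(f"cloud ~ ${cloud:,.0f}/mo, on-prem ~ ${onprem:,.0f}/mo")
```

Even a toy model like this makes the point from the discussion: the answer depends heavily on utilization, egress, and amortization assumptions, which is why predictability remains the hard part.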
The perception of the cloud as “broken” often stems from human decisions and reluctance to deviate from established practices. Embracing advancements in self-service, automated provisioning, and comprehensive analytics can help organizations overcome these challenges and fully leverage the benefits of the cloud.
As organizations continuously revisit their cloud strategies, the focus should remain on evaluating specific workload needs, leveraging the strengths of the cloud, and incorporating practical solutions. The industry must prioritize teaching the new generation of technologists the nuances of selecting the right tools for the job, ensuring that cloud adoption decisions align with business objectives and generate significant returns.
Finding the right balance between on-premises and cloud-based solutions remains a critical task for enterprises. It requires a nuanced approach, considering workload requirements, cost-effectiveness, performance, and scalability. By embracing hybrid models, leveraging cloud strengths, and making informed decisions, organizations can navigate the complexities of cloud adoption and unlock the full potential of modern enterprise technology.
QLC SSDs Are Ready for Mainstream with Solidigm
May 23, 2023
As NAND flash memory technology has evolved, MLC, TLC, and QLC have been perceived to compromise both reliability and performance. In this episode of the On-Premise IT podcast, we confront the reality of Quad-Level Cell (QLC) SSDs, shedding light on their capabilities and suitability for today’s workloads. Roger Corell of Solidigm, which sponsored this episode, discusses QLC SSDs with Karen Lopez, Alastair Cooke, and Stephen Foskett. The industry is increasingly embracing the benefits of QLC SSDs for mainstream workloads, and this discussion debunks common misconceptions, emphasizing the equivalent reliability, performance, and quality of QLC, TLC, and MLC SSDs. With the shifting landscape of read-intensive workloads and growing data demands, QLC SSDs offer an efficient and cost-effective solution for mainstream applications.
The Rise of QLC SSDs for Mainstream Workloads
As technology evolves, there are often trade-offs between reliability, performance, and capacity. This has been especially true in the world of NAND flash memory, where multi-level cell (MLC) technology has long been perceived to compromise both reliability and performance. However, in this episode of the On-Premise IT podcast, sponsored by Solidigm, industry experts confront the reality of Quad-Level Cell (QLC) SSDs, shedding light on their capabilities and suitability for today’s workloads. Roger Corell of Solidigm is joined by Karen Lopez, Alastair Cooke, and Stephen Foskett, in a discussion of the advantages of QLC SSDs.
In recent years, workloads have shifted towards being more read-intensive, aligning perfectly with the capabilities of QLC flash technology. Solidigm’s fourth-generation QLC SSD drives are a testament to this evolution, delivering impressive capacities of up to 32 TB, a wide range of high-density form factors, and cost-effectiveness tailored for mainstream workloads. These SSDs have even made their way into new form factors like EDSFF E3, offering improved cooling and density for applications beyond traditional data centers and the cloud.
One of the persistent misconceptions surrounding QLC SSDs is the belief that increased bit per cell density significantly impacts read performance. However, the latest generation of QLC products provide equivalent read performance to TLC SSDs, effectively debunking this perception. Users can expect reliable and fast read speeds, enabling efficient data access and retrieval.
Another concern often raised is the potential compromise of write performance in QLC SSDs. However, Solidigm’s latest QLC drives offer write performance that is within 20-67% of certain TLC SSDs, making them highly suitable for mainstream and read-intensive workloads. Furthermore, these drives boast impressive endurance with 3,000 program/erase (PE) cycles, ensuring their reliability and longevity even under demanding usage scenarios.
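To put those endurance figures in perspective, here is a back-of-the-envelope calculation of drive writes per day (DWPD) from the quoted 3,000 P/E cycles and a 32 TB capacity. The write-amplification factor and warranty period are assumptions for illustration, not Solidigm specifications.

```python
# Rough endurance math from the figures above; WAF and warranty are assumed.
def drive_writes_per_day(capacity_tb, pe_cycles, waf, warranty_years=5):
    tbw = capacity_tb * pe_cycles / waf            # total terabytes written
    return tbw / (capacity_tb * warranty_years * 365)

print(drive_writes_per_day(capacity_tb=32, pe_cycles=3000, waf=3.0))
# ~0.55 DWPD over five years -- ample headroom for read-intensive workloads.
```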
Contrary to the belief that QLC compromises reliability and performance, Solidigm emphasizes that QLC, TLC, and MLC SSDs all offer equivalent quality and reliability in terms of drive reliability and data integrity. By optimizing technology for specific use cases, any expected differences can be effectively alleviated, providing peace of mind for organizations adopting QLC SSDs.
The need to efficiently store and access vast amounts of data is on the rise. With read-intensive workloads becoming more prevalent and the ever-increasing size of AI models and HD movies, QLC SSDs prove to be an ideal solution for mainstream and read-intensive workloads. These drives can handle the demands of data-intensive applications, providing the required speed and capacity while maintaining cost-effectiveness.
The discussion also highlights the value and extended lifespan of solid-state storage. Retired systems often have significant life left in them, demonstrating that the limited write cycles of NAND flash aren’t usually a restriction. It is likely that QLC SSDs will similarly outlast the servers in which they are used.
As the perception-reality gap narrows, the industry is increasingly embracing the benefits of QLC SSDs for mainstream workloads. The panel’s insights debunk common misconceptions, emphasizing the equivalent reliability, performance, and quality of QLC SSDs. With the shifting landscape of read-intensive workloads and the growing data demands, QLC SSDs offer an efficient and cost-effective solution for organizations across various sectors, empowering them to unlock the full potential of their data-driven initiatives.
We Gave Away Too Much To Get Wi-Fi 6E
May 16, 2023
The industry is excited to implement Wi-Fi 6E with all the new devices coming out. Even with regulatory challenges the world is ready for faster connectivity and more reliable signal. But those same regulatory challenges are just part of the myriad of issues. Standards bodies, marketing teams, and even users themselves are asking why it’s taking so long to implement Wi-Fi 6E even after it has been brought to market faster than any Wi-Fi standard in the past. Is that because we gave up too many things to get it here? In this episode, Tom Hollingsworth talks to Sam Clements, Avril Salter, and Mario Gingras to find out whether Wi-Fi 6E got here so fast because we left so much of it behind.
Don’t Just Store Your Data, Make It Useful with Hammerspace Orchestration
May 09, 2023
There is a wide gap between storing data and making it useful, and it is getting worse with the growing volume of unstructured data. In this episode of the On-Premise IT Podcast, presented by Hammerspace, delegates Justin Warren and Chris Evans get together with Hammerspace’s Head of Global Marketing, Molly Presley, to drill into the pains of managing unstructured data and learn how Hammerspace addresses them with data orchestration. Data orchestration takes the one-dimensional approach of storing data to the next level: cleaning, organizing, enriching, and making data accessible across systems. It makes it possible to move large volumes of data across distances. Not bound by any one data or infrastructure type, data orchestration helps businesses handle new kinds of complex data and keep up with their changing uses.
Key Points
Data orchestration involves the movement and utilization of data regardless of its location or format. The challenges posed by unstructured data and the rise of edge computing have emphasized the need for efficient data orchestration solutions. Over the past two decades, the transition has shifted from structured to unstructured data, requiring complex workflows and interconnections between different data types. Unstructured data, such as genomics data, microscopy data, and multimedia data, necessitates effective data orchestration for proper management and utilization.
Technology advancements have enabled the decoupling of metadata from individual storage systems, allowing for distributed orchestration and flexibility in leveraging unstructured data. The focus is now on leveraging technology to enable desired actions with data, rather than being constrained by traditional tools. The shift towards NoSQL databases and data lakes reflects the need to make both structured and unstructured data useful and break free from tool limitations. Efficient data orchestration enhances workflows by facilitating data transformations, metadata application, collaboration, archival, retrieval, and interaction, while allowing for flexibility in storage systems and applications.
Data orchestration removes friction and simplifies access to data by eliminating the need to remember specific storage systems or locations. It empowers users to repurpose or modify data efficiently. By separating the storage system from data management, a more flexible approach is achieved, where storage focuses on security, performance, and accessibility, while data policies and actions can be layered on top for effective organization and utilization. Skilled professionals, often referred to as data architects or librarians, play a vital role in managing and organizing information across different storage systems.
The longevity of data is an important consideration, as it often outlasts storage systems and applications. Automation plays a crucial role in ensuring data remains accessible even when the original creators are no longer present. Curation, similar to the role of librarians curating books, is essential for data management. Data curators ensure data quality, facilitate migration between mediums, and appropriately dispose of unnecessary or sensitive information. Frictionless access to data is critical, as its value diminishes when it cannot be easily accessed and utilized. Data management and data orchestration are interconnected, with orchestration facilitating the movement and presentation of data to applications and users while adhering to front-end and back-end policies. The complexity and volume of data necessitate a robust orchestration model to unlock its true value, and advancements in AI engines and commercial products are emerging to meet the evolving needs of data businesses.
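As a hypothetical illustration of policies layered on top of storage, the sketch below chooses a placement tier from file metadata. The tags, tier names, and thresholds are invented for this example and do not represent Hammerspace’s actual API or policy engine.

```python
# Illustrative metadata-driven placement; tiers and tags are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FileMeta:
    path: str
    last_access: datetime
    tags: set

def choose_tier(meta: FileMeta, now: datetime) -> str:
    if "compliance-hold" in meta.tags:
        return "immutable-archive"       # retention policy wins over cost
    if now - meta.last_access < timedelta(days=7):
        return "performance-nvme"        # hot data stays close to compute
    if now - meta.last_access < timedelta(days=180):
        return "capacity-object"
    return "cold-archive"

now = datetime(2023, 5, 9)
f = FileMeta("genomics/run42.bam", datetime(2023, 5, 7), {"project-x"})
print(choose_tier(f, now))               # -> "performance-nvme"
```

The point of the sketch is the separation of concerns discussed above: the storage systems provide security, performance, and accessibility, while placement decisions like this one live in a policy layer driven by metadata.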
Private 5G Can’t Be Simplified For Enterprises
May 02, 2023
Private 5G is coming to an enterprise near you. Deploying this hot new wireless and mobility technology is a great way to overcome challenges with Wi-Fi and allow IoT devices to be provisioned quickly in remote locations. But with new technologies comes new complexity. Can Wi-Fi engineers figure out how to make these two solutions co-exist? And does the complexity of 5G radio technology mean we have to choose between one or the other? In this episode, Avril Salter, Troy Martin, and Keith Parsons discuss the complexity of Private 5G and whether or not it can be simplified for the enterprise.
Although the Utilizing Edge podcast dives deep into the topic of edge computing, it’s worth considering the topic from other perspectives. This episode of the On-Premise IT podcast features Andy Banta, Jim Jones, and Gina Rosenthal discussing the reality of edge with Stephen Foskett, host of Utilizing Edge. After a quick consideration of the various devices that could be considered edge, from mobile phones to 5G base stations to software to Intel NUCs, we discuss the requirements and demands of edge computing. But there are many similarities with datacenter and cloud, including virtual machines and containers, Kubernetes and firewalls, hyper-converged infrastructure and SD-WAN.
The term “lock in” gets thrown around a lot in IT. It’s the reason why we spend so much time engineering solutions that don’t make us dependent on technologies. But does it really matter in the long run? Is the treatment for lock in really better in the end? Does it matter if it’s in the enterprise or in the cloud? And are we finding ourselves locked in slowly or all at once? In this episode of the On-Premise IT Podcast, Jody Lemoine, Snehal Patel, and Steve Puluka join Tom Hollingsworth to discuss the pros and cons of lock in and how it might not be as bad as you’re imagining.
Every piece of software has an API now. If you want to interact with the program you’re going to need to write a program of your own. Are you ready to learn? Should companies still be expected to have a CLI available for the non-programmers? And does the ease of accessing a published API create more problems than it solves? In this episode of the On-Premise IT Podcast, Tom Hollingsworth, Jody Lemoine, and Steve Puluka discuss the advantages and disadvantages of having easy access to everything at your fingertips via the API.
Databases and Storage Systems Are Converging
Apr 04, 2023
Storage systems and databases are becoming increasingly alike. In this episode of the On-Premise IT Podcast, recorded at the recent Storage Field Day event in California, delegates Glenn Dekhayser, Jim Czuprynski, and Denny Cherry join host Stephen Foskett to chew over this. In what ways are storage platforms and databases alike? Are they only different in the way they store data, or is the distinction purely based on the IT personas using them? Watch the panel explore the ins and outs of modern arrays and databases and dissect why, despite their likeness, the two exist side by side. Learn how databases and storage platforms treat data differently, and hear them list the properties that separate the two.
Enterprises Need Security Specialists to Succeed with Fortinet
Mar 28, 2023
The prevailing cybersecurity skills shortage has impacted security teams around the world and their ability to protect against today’s threats. As decision makers analyze the potential remedies to this issue, how should they proceed? Is automation the answer? Should you be investing in cybersecurity training programs to advance your employees’ skills and expertise? Should you look at bringing in third party experts to help you close the gap? Or is it a combination of all three?
In this episode, brought to you by Fortinet, Melissa Palmer, Chris Grundemann, and Karin Shopen of Fortinet dive into the state of the modern enterprise and cybersecurity skills, and the solutions that can help security teams combat the talent shortage in the short and long term.
Fortinet Panelist:
Karin Shopen, Vice President of Cybersecurity Solutions and Services at Fortinet. You can connect with Karin on LinkedIn. Learn more about how Fortinet’s SOC-as-a-Service and Cybersecurity Assessments and Readiness Services can help combat the challenges of the skills gap and better protect against cyber threats. In addition, find out how the Fortinet Training Institute can help security professionals advance their skills and help organizations build the cyber workforce of tomorrow.
Ever since IBM introduced external storage in the mainframe age, the concept of big storage has kept changing. This special episode of the On-Premise IT podcast includes Storage Field Day 25 delegates Ray Lucchesi, Andy Banta, and Rohan Puri considering the current state of enterprise storage, from the datacenter to the cloud. Although every device and computer has storage of one sort or another, the market still has a tremendous appetite for dedicated storage hardware, software, and services. Storage for different use cases can be wildly different as well, from IoT and edge to machine learning and analytics to databases to the cloud, and each of these has different solutions. But it’s all big storage, and big storage will continue to grow and diversify.
IP Address Management in a Modern Dynamic Network
Mar 14, 2023
IP address management in the era of IoT and BYOD is a total torment, especially for growing organizations. In this episode of the On-Premise IT Podcast, recorded at a Tech Field Day event in California, delegates Aaron Conaway, Jeffrey Powers, and Michael Davis take the lid off the pain points of managing DNS and DHCP in large networks. Learn about the current methods network administrators use, and why they are inefficient at best and messy at worst. Hear the experts give their verdict on what could be the most elegant solution for administrators.
The IT Industry Is Doing Better Than It Seems
Mar 07, 2023
There’s a lot of bad news coming out of the IT industry, especially from large service providers and suppliers, but overall things aren’t as bad as they sound. This episode of On-Premise IT features Andy Banta, Nico Stein, and Geoff Burke, who will be attending Tech Field Day this week, discussing the state of the industry with Stephen Foskett. Many tech companies, especially hyperscalers and online services, over-hired and over-purchased during the pandemic and have handled the resulting pull-back and layoffs particularly poorly. But supply chain constraints are normalizing, and companies outside Silicon Valley are purchasing and hiring again, which can help offset these losses. Leading-edge startups and new technologies continue to see investment and uptake, and these are the companies that will lead the industry out of this tech recession.
Is Encrypted Traffic Monitoring Worth It?
Feb 28, 2023
In the race to make our users safer, have we reduced our visibility? Encrypting traffic with TLS everywhere means that our users can use online banking and protect their privacy without worry of theft or eavesdropping. However, those same protections also obfuscate malicious traffic and keep our security personnel from finding attackers. Should we implement newer methods of analyzing encrypted traffic? Is it reliable? Or are we just guessing? In this episode, Tom Hollingsworth, Jasper Bongertz, and Dominik Pickhardt discuss whether or not encrypted traffic monitoring is really worth it.
Even though most of the technologies and infrastructure elements are the same, the nature of edge computing is entirely different. This episode of On-Premise IT features Edge Field Day delegates Enrico Signoretti, Allyson Klein, and Alastair Cooke discussing these differences with Stephen Foskett. Many edge environments look a lot like the datacenter, with servers, switches, storage, virtualization, and more. Other environments resemble clouds, with Kubernetes, hyper-converged infrastructure, and containerization. All of these technologies are being deployed in a different way, and they are transformed by being adapted for use at remote locations.
Uniqueness is the enemy of consistency. We spend our time building custom environments and then find our efforts to automate them fail. In this episode of the On-Premise IT Podcast, John Kilpatrick, Jody Lemoine, and Vince Schuele highlight all the reasons why we’ve spent our careers tailoring the needs of the infrastructure to the workload. They also discuss why it’s important to understand how things need to be accomplished and how automation and orchestration systems need to be tuned to work with our design philosophies. Learn why you shouldn’t build snowflakes but also how to avoid being buried under a blizzard of design decisions.
XDR is the latest exciting solution that will fix all of your security woes. It will break down silos in your organization and reduce response time to intrusions. But is it a tool that only the security team can use? Or is it something that requires buy-in from the whole organization? In this episode, Tom Hollingsworth, Zoë Rose, and Dominik Pickhardt discuss XDR and how it can be leveraged by the entire organization to help secure your assets and users.
Storage as a Service is More Than Financial Engineering with Pure Storage
Jan 31, 2023
Although the move from capital to operational expenses is important, many organizations expect more than financial engineering from their storage as a service. This episode, sponsored by Pure Storage, is a discussion of the various ways storage can be delivered and consumed as a service. Taruna Gandhi of Pure Storage discusses this premise with Chris Grundemann, Max Mortillaro, and Stephen Foskett. First we tackle the question of capital versus operational expenses and alignment of business and IT. Next we consider whether companies need ownership of storage or just access to appropriate capacity. Finally we discuss the creation of a true managed service and the transition to the cloud.
It’s Time to Repatriate Applications from the Cloud
Jan 24, 2023
Although the definition of hybrid cloud is loose, it’s inescapable that organizations are getting smarter about locating applications on-premises as well. In this episode of the On-Premise IT podcast, Michael Levan and Jon Myer join Stephen Foskett to discuss on-premises IT. It can seem that this discussion is driven by cost, but companies are actually getting a lot smarter and considering how to make the best use of resources in any location. In addition to on-prem versions of cloud platforms, OpenStack is still being deployed, and OpenShift allows transparent location of applications. For Kubernetes applications, there are Rancher, Portainer, Lens, Arc, Anthos, and more. Cloud Field Day includes presentations from SoloIO, Forward Networks, and Fortinet, and all of these are important solutions for the modern cloud as well. Cloud technology is a breath of fresh air for the datacenter and the people who work in modern IT, as applications are repatriated to the modern hybrid cloud.
Enterprise IT Doesn’t Care About A Recession
Jan 17, 2023
Is enterprise IT capable of weathering a recession better than any other area of business? The vendors of the technology would have you believe that they’re not going to be affected by a coming slowdown in purchasing. The analysts say that there is trouble brewing on the horizon. What is it about infrastructure that makes it more likely to be a safe bet? In this episode, Tim Bertino, Ethan Banks, and Tom Hollingsworth discuss the various ways that investing in tech helps grow business but also the potential pitfalls of companies that think they’re bigger than the economic indicators.
Hybrid Cloud is Evolving Into the Multicloud with NetApp
Jan 10, 2023
The whole concept of the cloud is evolving, from traditional datacenter to cloud-native applications to the next generation of hybrid cloud. This episode of On-Premise IT, sponsored by NetApp, features Arjan Timmerman, Vuong Pham, Stephen Foskett, and Phoebe Goh discussing the evolved cloud. Data gravity and sovereignty are leading enterprises to reconsider the platforms chosen to host their applications. Although terms like hybrid cloud, private cloud, and multicloud are changing, customers care more about functionality and practical applications. Data must be made available both inside and outside the datacenter, but it must be managed and protected in these locations and comply with regulations and industry best practices. Businesses are increasingly using multiple cloud services as well, from public IaaS to their own datacenter Kubernetes environment. This makes it especially difficult when considering data services and tools, since they must support a wide variety of platforms.
Flash Memory Won’t Replace Disk Drives, Let Alone Tape
Jan 03, 2023
Flash may be the shiny new toy that enterprises are beguiled with, but it’d be wrong to prophesy that flash will put older technologies like disk and tape out of business. In this episode of the On-Premise IT Podcast, host Stephen Foskett talks with guests from the IT world, Jim Czuprynski, Richard Kenyan, and Ray Lucchesi, about the growing prominence of solid-state drives and what it means for disk and tape. Learn how the older storage media are keeping pace with the meteoric rise of flash, and whether there will come a time when these older options fall out of favor.
Edge computing is getting more real every day. What emerged as a viable architecture to support distributed computing has now taken center stage. So, as it stands now, how do we define the edge? What technologies make up the edge in 2022? Is it truly coming together as a computing paradigm, or is it just another term that came out of the marketing mill to disappear tomorrow or be used for something else?
Filmed in Santa Clara, California, this On-Premise IT Podcast features Stephen Foskett and Tech Field Day delegates Vuong Pham, Sr. Solutions Architect, Ken Nalbone, Solutions Architect, and Joey D’Antoni, Principal Consultant, answering these intriguing questions about the edge and hopefully making the concept a little bit clearer.
Wireless is the new edge. Almost every client out there, whether a laptop or phone or IoT sensor, uses Wi-Fi to connect to the network. What happens when security gets in the way? How can we keep our devices safe but ensure that users or headless systems are capable of connecting without issue? How should our security team be involved in the discussion? Do you agree that Security Isn’t The Wi-Fi’s Fault?
Enterprises Should Be Looking for CloudOps Devs
Dec 06, 2022
A major hurdle in the path to deploying cloud-based technologies is the skills shortage. It’s a crisis many organizations are struggling with, and it is slowing their cloud progress. The story behind this makes perfect sense. Modern CloudOps isn’t the same as it was in the beginning. With the advent of hyperconnectivity, cloud computing is about adapting rapidly to new technologies, and that demands a high level of cloud computing skill. The reason those skills are in short supply is that cloud professionals are on a steep learning curve, upskilling as the industry evolves. This gap can potentially put a pause on enterprises’ digital transformation projects.
In this On-Premise IT podcast recorded at the recent Cloud Field Day event in Silicon Valley, IT experts Gina Rosenthal, Phoummala Schmitt, and Becky Elliott put a finger on what precisely CloudOps entails and what skills enterprises should be looking for.
The Initial Journey to the Cloud is Over with NetApp
Nov 29, 2022
After nearly two decades since the first use of cloud computing in modern IT, organizations have completed the first phase of their journey into the cloud, and now the next stage of the transition begins. In this episode of the On-Premise IT podcast, brought to you by NetApp, host Stephen Foskett and panelists Joey D’Antoni and Max Mortillaro, along with Jeff Baxter from NetApp, discuss the unfolding of the cloud transition. Although the migration phase is largely over, the journey hasn’t been the same for every company. Not every organization’s itinerary was packing up and leaving for the cloud. There are organizations that consciously chose to leave a section of their workloads on-premises, or even move things back on-premises after running them on public cloud, because that made the most sense. But whatever curve organizations followed, it appears that they have worked out what and whether to migrate to the cloud, a decision that was shrouded in ambiguity in the early stages.
It’s Too Hard to Collaborate in Automation
Nov 22, 2022
Ineffective collaboration between teams in automation leads to a lack of clarity that translates into doing the same work over and over again. In this episode of the On-Premise IT Podcast, presented by RackN, host Stephen Foskett delves into this problem with Calvin Hendryx-Parker, Keith Townsend, and Rob Hirschfeld, Founder and CEO at RackN, to discuss ways in which the walls between teams can be knocked down. RackN sees reusable automation as the way to reduce toil and improve communication. RackN builds automation workflows that are universally usable across teams, so that every time a new feature is added or a bug is fixed, it is accessible to all teams. Listen to this podcast to learn how collaborative automation saves time and effort, and how RackN achieves it with IaC automation.
Hybrid Work has Vastly Increased the Enterprise Threat Surface
Nov 15, 2022
The threat surface of the enterprise has increased dramatically because it now includes remote workers. In this episode of the On-Premise IT podcast, Jasper Bongertz, Jon Myer, and Evan Mintzer discuss the ways in which our organizations face increased challenges with securing workers who are outside our enterprise defenses. Find out how organizations are trying to solve issues with personal devices as well as home internet traffic filtering. Hear how professionals are using tools more effectively to prevent attacks and reduce dwell time for potential threats.
Security has to be Integrated into the Entire IT Stack
Nov 08, 2022
Modern storage systems increasingly have data security capabilities, but these need to be part of a complete security solution to be effective. This episode of the On-Premise IT podcast, sponsored by Dell Technologies, takes on the premise that security must be integrated throughout the entire IT stack to be effective. Join Stephen Foskett of Gestalt IT, Pete Gerr of Dell Technologies, Enrico Signoretti, and Girard Kavelines as they consider the state of the art in security. No matter what security capabilities a system has, they won’t protect the business unless they are integrated with applications and the end user workflow.
Although object storage goes back decades in the enterprise datacenter, it’s nowhere near as dominant as it is in the cloud. That’s the premise Stephen Foskett puts to some of the Storage Field Day delegates in this episode of the On-Premise IT podcast. Glenn Dekhayser suggests that AWS S3 has created a data gravity well that is turning into a data singularity for object storage in the cloud, and this might drive adoption of this technology in the enterprise. Richard Kenyan came into storage later and saw much greater adoption of enterprise object storage, so maybe we’re just overlooking it? Jim Czuprynski comes from a DBA background so he always thinks of storage as objects, and suggests that databases are the logical future for storage. What is an object store really? Is it a database? Is it about the metadata and structure? Or is it defined by being application-integrated? All of these questions cloud the market, leaving us to overlook the vast world of enterprise object storage.
Subscription Services are Strangling Enterprise IT
Oct 25, 2022
Subscriptions have moved up from meal kits and streaming services to all of technology. In just a few short years, enterprise IT has transitioned into a service-focused industry that makes a lion’s share of its revenue from subscription services. The subscription model instantly got an enthusiastic reception because now you can subscribe to anything you want; on the flip side, you will never own anything. The subscription economy has been on the rise for a while now and many would argue its merits, but speaking specifically of IT, this everything-as-a-service also has a darker side that is still largely unexplored.
That is the premise of today’s discussion. In this On-Premise IT podcast, Stephen Foskett and our panel of delegates from the IT world, Andy Banta, Vuong Pham, and Pete Robertson, take the subscription revolution, one of IT’s biggest business trends, and distill it to reveal its less exciting side.
The Future of Datacenter is Serverless
Oct 18, 2022
Despite all the hardware and software changes that have come to the datacenter, the server has remained the primary unit. But new technologies, from CXL and silicon photonics to virtualization and containers, are challenging the entire concept of a server. In this episode, Craig Rodgers, Chris Reed, and Chris Hayner join Stephen Foskett to consider the end of the server itself. The first step in this evolution was the move to external storage, followed by blade servers, and now complete disaggregation and composability. At the same time, a virtual machine or container can already be seen as a logical server, while microservices and webscale architecture eliminate the very idea of a server. Are we building a new mainframe or rack-sized servers? Will people really adopt this concept or is it completely the wrong direction?
Network Access Can’t Be Controlled From The Edge
Oct 11, 2022
The network has moved from the edge to the core and on to the cloud. We have added intelligence throughout and tried to make things easier for professionals to manage. However, there are still areas that need improvement. As we increase the ways we can extend our network, we need to ask ourselves if we should. The best example is the network edge. In this podcast we discuss the idea that Network Access Can’t Be Controlled From The Edge.
The world of wireless is advancing quickly and new protocols and hardware are coming into the market. Manufacturers are pushing the latest and greatest technologies around Wi-Fi 6E. Are you ready to embrace it? Should you be looking at it today? Or is this more of something to adopt later? In this episode we debate the premise that You Don’t Need Wi-Fi 6E Today.
It’s The End of VMworld as We Know IT
Sep 27, 2022
To the IT community, VMworld is an ops conference like no other. But things are about to change. The in-person VMworld events stopped with the pandemic, with the last held in 2019. Now VMware is back with another conference, but it’s not quite the same as VMworld. A lot has changed in this time. VMware is being acquired by Broadcom, for one. And on the heels of that, VMworld became VMware Explore. Is it just post-merger rebranding, or is this the end of VMworld as we know it?
Recorded at VMware Explore 2022 US in San Francisco, this podcast explores the question that the entire IT community is scratching its head about. Brian Knudtson, Gina Rosenthal and Alastair Cooke join Stephen Foskett in this discussion that tries to define the ways in which VMworld has changed and what to expect from the future VMware Explore conferences.
IT Infrastructure Companies Don’t Understand Developers
Sep 20, 2022
Enterprise IT companies are fixated on developers as a new market for IT infrastructure products, but it seems like they don’t even know what the term means. This episode of the On-Premise IT podcast brings Joep Piscaer, Nathan Bennett, and Calvin Hendryx-Parker together with Stephen Foskett to talk about the new world of developer-focused enterprise tech.
Storage Admins Aren’t Ready for Infrastructure as Code
Sep 13, 2022
Storage has historically not been compatible with modern infrastructure as code concepts, so today’s administrators probably aren’t ready for this change. In this episode of the On-Premise IT podcast, sponsored by Pure Storage, Larry Smith and Jim Czuprynski discuss the evolution of storage with Anthony Lai-Ferrario of Pure Storage and Stephen Foskett of Gestalt IT. Moving storage forward requires process and organizational changes as well as the application of new technologies. But of course an evolution to storage as code also requires technical changes like the creation of complete APIs and integration with modern frameworks. What can today’s storage admins do to get ready? Start looking at storage as a service to be provisioned and matched to an SLA, embrace public cloud concepts and technologies, and try to script and automate every component of the infrastructure stack. Anthony recommends starting with the interfaces supported by your storage solution, then picking a tool that allows you to start working, and finally putting scripts and tools into version control. This helps bridge the gap between manual configuration and infrastructure as code.
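To make that last recommendation concrete, here is a minimal sketch of what “storage as code” can look like in practice: a declarative volume spec kept in version control and an idempotent script that reconciles it against a storage array’s REST API. The endpoint, token, and payload fields below are hypothetical placeholders rather than any particular vendor’s interface, so treat this as an illustration of the workflow, not a working integration.

```python
# Hypothetical "storage as code" sketch: reconcile a Git-tracked volume spec
# against a storage array's REST API. Endpoint, token, and fields are made up.
import requests

ARRAY_API = "https://storage-array.example.com/api/v1"  # placeholder endpoint
TOKEN = "replace-with-a-real-api-token"                 # inject via env/secrets in practice

# Desired state, as it might be kept in version control alongside other IaC.
volume_spec = {
    "name": "analytics-scratch-01",
    "size_gb": 512,
    "sla": "gold",  # would map to a QoS or replication policy on the array
}

def ensure_volume(spec: dict) -> None:
    """Create the volume only if it does not already exist."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    existing = requests.get(f"{ARRAY_API}/volumes/{spec['name']}", headers=headers)
    if existing.status_code == 200:
        print(f"{spec['name']} already exists, nothing to do")
        return
    created = requests.post(f"{ARRAY_API}/volumes", json=spec, headers=headers)
    created.raise_for_status()
    print(f"created {spec['name']} ({spec['size_gb']} GB, SLA {spec['sla']})")

if __name__ == "__main__":
    ensure_volume(volume_spec)
```

From here, the same spec could be handed to a tool such as Ansible or Terraform once the team is comfortable, which is the bridge between manual configuration and full infrastructure as code that the episode describes.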
In the modern world of enterprise IT, it’s important to keep an eye on your systems. But the amount of data being generated makes it very difficult to know what’s happening and how to fix issues. Companies are touting the latest advances in ML and AI to help solve the issue, but does Observability Need to be Smarter?
Real Cloud Hybrid Storage Doesn’t Exist
Aug 30, 2022
There is a long-standing dream of hybrid cloud that combines the best of datacenter and cloud architecture, but does this exist for storage? In this episode of the On-Premise IT podcast, Enrico Signoretti and Chris Evans join Molly Presley of Hammerspace and Stephen Foskett of Gestalt IT to discuss the reality of hybrid cloud storage. Like a hybrid car, the dream of hybrid cloud is to bring the best of two different infrastructure approaches together in a truly unified fashion. But most hybrid cloud solutions fail to deliver on this promise in terms of technology, protocols, management, security, and usability. Can a true hybrid cloud storage solution exist? And what will it take to bring this to market? Hammerspace is creating a global data environment rather than simply hybrid cloud storage, with the goal of enabling IT to manage data while users simply access a single namespace.
VXLAN changed the way we use layer 2 networking in the data center and solved a lot of our multitenancy problems. Today it’s inextricably linked with EVPN. Do you need to use VXLAN with EVPN? Are there alternatives? And how does the cloud change the conversation? In this episode, we discuss the pros and cons behind how EVPN Doesn’t Need VXLAN.
Service Providers Lead the Way in Automation
Aug 09, 2022
Modern enterprise network teams are excited by the prospect of network automation. But is automation a new idea? Or is the enterprise finally catching up to where other disciplines have been all along? In this episode featuring delegates from Networking Field Day: Service Provider, we discuss how Service Providers Lead the Way in Automation.
The world is more wireless now than ever. No matter where you go you’ll find some form of wireless connectivity. But what about those applications that need to be more reliable? What about those places where you need it to work no matter what? Is wireless reliable enough? Or is wireless not mission critical?
What really is multi-cloud, and why does it keep coming up so often in the context of enterprise cloud computing models? If you know anything about cloud computing, you’d know that the hottest thing in the cloud domain right now is multi-cloud – a model companies are incrementally adopting to keep their applications and services distributed across public and private clouds, and now edge cloud platforms. Given that this is the status quo for a majority of enterprises in the cloud, the big question is – is multi-cloud inevitable? Are companies that have not embraced it yet also headed in that same direction? Are all vendors in the future going to have a cross-cloud service portfolio to support the multi-cloud movement? Or is it just another trend that’s having its 15 minutes of fame? On this episode of the On-Premise IT Podcast, Stephen Foskett is joined by a panel of delegates who drill into these questions to bring clarity to the topic of multi-cloud and what the future looks like for this model.
You’re Using the Word Site Survey Wrong
Jul 12, 2022
In wireless we talk about doing site surveys, but do we actually know what that means? Are we even talking about the same thing when we mention it as part of a pre-sales engagement or post-sales validation? And who should be doing it? In this episode, our group of wireless experts discusses the premise and whether or not we are Using the Word Site Survey Wrong.
The Future of Storage Isn’t Purely Hardware or Software
Jul 05, 2022
The development of enterprise storage has historically oscillated between a focus on special-purpose hardware and optimized software. In this episode, brought to you by Pure Storage, Justin Emerson, Justin Warren, and Marc Staimer join Stephen Foskett to discuss this push and pull. Modern storage systems are extremely complicated, with layers of virtualization, different protocols, intelligence, tiered media, and scalability. All of this complexity makes the design of a storage system more difficult, so a company like Pure that can control the entire stack is able to deliver a more efficient system.
Is AI Supplementing Our Skills or Deskilling Us?
Jul 01, 2022
Can you imagine a world without AI? It’s a bit too late for that, because in a short time AI has become an inseparable part of our lives. Imagine typing without the autocorrect feature or building a playlist without recommendations. AI has undeniably made our lives easier in a lot of ways, but there’s a debate forming around it. By saving us the trouble of doing certain things, is AI ultimately deskilling us? Is it robbing us of our natural ability to do the mundane things, like learning how to spell a word? Some would argue that it is saving us time and bandwidth by doing the routine things that take little imagination or intelligence so that we can focus on more important and pressing matters. In this episode of the On-Premise IT podcast, Stephen Foskett is joined by a panel of industry experts who dissect this premise in search of a black and white answer.
Consistent Security is Very Difficult
Jun 28, 2022
Security is a hard job. We spend our time analyzing our environment and building controls to keep our users and their data safe. As hard as security can feel, it’s even harder to apply consistently. Attackers only have to get lucky once. Is your company using security in a reactive mode? Or are they planning ahead with policies and platforms designed to prevent exposure? In this episode, sponsored by Fortinet, we discuss why Consistent Security is Very Difficult.
The more advanced a technology becomes, the more complex it appears. Is that always the case? Or is the complexity in networking coming from something besides the technology itself? Are there reasons why networking gets more and more complex as the years go by? Are we just doing this to ourselves? Is Networking Too Complex?
Pretty much every application today has an API and scales out using modern infrastructure approaches, but is it a cloud? That’s the question we put to the panel of podcast guests today, as we look forward to Cloud Field Day 14 in June 2022. Certainly APIs are important to cloud applications, but it takes more than an API to be a cloud. The cloud operating model, and as-a-service financial models, are just as important, as are automated provisioning and scaling and hands-off management. All of these things existed historically in enterprise IT but never came together the way they do with today’s cloud services, and this is what makes them unique.
You may have engineering talent on staff but how full are they? Because you must have a full stack engineer, or you’ll never get anything accomplished. What is a full stack engineer? Why are they so in-demand? And why are there no full stack lawyers? Or full stack doctors? In this episode, we discuss whether or not Full Stack Engineering is a Joke.
Enterprise IT has endured a significant number of changes in the past few years. The rise of the cloud coupled with a global pandemic forcing users to work remotely has made stakeholders ask about the need to replace hardware for a location that doesn’t see much traffic. Coupled with the desire of hardware vendors to move to a subscription model, it’s time to ask Does Enterprise IT Matter Anymore?
Sometimes the Best Storage is No Storage
Jun 07, 2022
The line between storage and memory is blurring thanks to Intel Optane technology, and systems equipped with this might not need storage at all. Join Dr. Jawad Khan of Intel as he discusses a real-world system that was able to outperform a high-end solution at a tenth the cost thanks to Intel Optane technology. Justin Warren and Frederic Van Haren join Stephen Foskett to discuss the implications of a system that can keep an entire big data graph in persistent memory and thus does not need as much memory or high-performance storage. Dr. Khan uses Intel’s winning entry at the recent NeurIPS Conference Big ANN Challenge competition as an example, in which the Intel offering returned 4x better CAPEX and OPEX than alternatives.
Companies are reinventing things all the time in IT. Whether in their own fields or when venturing out to new ones, it seems that every company is trying to do things its own unique way, aiming to bring something new and incredible to the market, but there is a fallacy to this approach that is foiling their good intentions. Enterprise IT companies don’t seem to communicate much amongst themselves, and that is causing them to repeat each other’s mistakes and develop partial solutions to full-size problems. Secure as they are in their own silos, wouldn’t it be great if they came together and talked to each other, shared their decades of experience, talked technologies, exchanged their thoughts, and helped clear each other’s doubts? As we enter a new era of enterprise IT, it may be time to end this disengagement and communicate more, for the good of their own business and to cater better to the customers they serve.
People Don’t Realize How Insecure Their Storage Is
May 24, 2022
Most people assume that storage systems are secure, but security is not necessarily part of the design for most storage systems. In this episode, sponsored by RackTop Systems, Marc Staimer and Arjan Timmerman are joined by Eric Bednash and David Hughes of RackTop to discuss the real state of storage security. Although cybersecurity professionals have processes and directives in place in many spots within modern IT infrastructure, most do not have robust security practices within the domain of storage. RackTop sees this as a “doors and windows” problem, with a misplaced sense of perimeter security that does not match modern architecture. How do we secure storage in modern environments? The key to the RackTop solution is to monitor the behavior of storage access, use this to infer the intent behind it, and act to intervene. The system then acts based on policy to set up an active defense of the data accessed through the RackTop system.
Given the announcements of just about every company in the industry, it appears that AI requires specialized storage to function. In this special episode, recorded prior to AI Field Day, Andy Banta and Karen Lopez join Stephen Foskett to discuss the relationship between storage, data, and AI. We’ve seen lots of companies, including AI Field Day presenters DDN, NetApp, and WEKA, selling their high-performance distributed storage solutions for AI workloads. But is this just the nature of modern storage or is it truly linked to AI systems? It seems that much of this depends on the specifics of the AI application, whether it is used for training, the size and nature of the data set, and the specific use case. We should also consider what we mean by AI. Is it image recognition, autonomous driving, or a massive data set like GPT-3 or DALL-E? Just as with all such architecture questions, the answer is it depends on the specific use case.
The technology behind VPNs is venerable. Heavy clients that create client networking issues, coupled with cumbersome key exchanges, make users want to throw their hands up in dismay. Applications have integrated security features and moved to the cloud to provide centralized access. Are VPNs even worth it anymore? In this episode, brought to you by Keeper Security, we debate the premise that VPNs Aren’t Required Anymore.
Developer Advocacy Isn’t Exactly What We Think It Will Be
May 03, 2022
Many companies are trying to leverage DevOps to sell products through developer advocacy, but does it actually work? Just having open source or sharing code on GitHub doesn’t guarantee that a company will get quality engagement, let alone testing or adoption. In fact, a developer focus can be counter-productive if a company wants to become a strategic partner to other companies. Open source works great to spread and develop software, but self-centered developer-focused marketing isn’t the same thing. Is it true engagement or is it a one-sided effort to attract customers and leverage free talent? If companies focus on real developer engagement, advocacy will take care of itself.
Do Enterprises Need Private Cellular?
Apr 19, 2022
Wireless is the new normal for endpoint connectivity. But does it solve all the problems we have with connecting lots of devices over a wide area? Are there places where Wi-Fi doesn’t work as well as other technologies? With the rise of private cellular deployment options IT departments are asking whether or not they should be considering alternatives to traditional Wi-Fi. In this episode, a panel of experts answers the question Do Enterprises Need Private Cellular?
Too Much Security is Just as Bad as No Security
Apr 05, 2022
We all know how dangerous it is to have no security around your important IT assets. We need to keep the users safe and the infrastructure secure. But what happens when we go overboard and implement too many policies and tools to protect things? Are we really helping the situation? Or are we just making it worse for everyone, including the people who rely on us to get their work done? In this episode, our panel of security experts discusses whether or not Too Much Security is Just as Bad as No Security.
In 2022, Ransomware Attacks Will Dissipate
Mar 22, 2022
We’ve seen the peak of ransomware attacks in 2021. More and more attackers have started going after critical targets and trying to extort money from their victims. In 2022, will this continue? Or will the responses to the threat landscape have an impact? In this episode, the premise is that in 2022, Ransomware Attacks Will Dissipate.
Storage is not secure, period. Storage has security weaknesses that can open doors for attackers to easily hack into sensitive data stores, and this is not news. Security vulnerabilities in storage have existed for years, but not much has been done about them. Today, in the face of rising ransomware attacks, when data security is supremely important, storage is still the weakest link in security. In this On-Premise IT Roundtable Podcast, Stephen Foskett and his guests discuss this situation to find a strategic solution to poor security in storage.
Firewalls Don’t Seem To Belong Everywhere
Mar 15, 2022
The typical approach to security is to just put some firewalls in place and create a perimeter, right? While that might work well in very specific enterprise settings, reality is more nuanced, especially for a service provider. In this episode we discuss why Firewalls Don’t Seem to Belong Everywhere.
Next-Generation Storage is Not Storage Anymore
Mar 08, 2022
Looking at the new companies in enterprise storage, it’s obvious that storage just isn’t storage anymore. Even more traditional storage vendors are adding software as a service and data services, and these are major selling points rather than the ability to store data. At Storage Field Day in March 2022, none of the companies is presenting block and file storage, SAN or NAS, or any of the other traditional storage technologies. Is the storage array “a solved problem” or are we just looking at the new application-centric market? Pure Storage is perhaps the most traditional storage company presenting this week, but they’re coming to talk about Pure1 and storage as a service. The same is true of companies like Hammerspace and SIOS that are focused on data distribution and availability. VAST Data, RackTop Systems, Fungible, and MinIO are developing storage solutions but they are aimed at special use cases, like data analytics, security, high-performance, and cloud. Then there’s Intel, who is coming to Storage Field Day to give an update on their Optane technology, which promises to fundamentally change the entire storage paradigm. Clearly, next-generation storage isn’t storage anymore, it’s something altogether new and different!
Reactive Security Controls Are Not Enough
Mar 01, 2022
The modern world of security is transforming quickly. Attackers are leveraging new tools and new ways to invade systems and capture data or demand ransoms. The traditional method of securing your enterprise isn’t enough anymore. If you’re thinking in a reactive way, you’re falling behind and may never catch up. In this episode we discuss all the ways that Reactive Security Controls are Not Enough.
You’ve probably heard of Web3, but how do we actually define it? We know that it has something to do with connecting the web to a blockchain, but the true definition of Web3 is unclear. We can all agree that there are exciting aspects of Web3, but why are we doing this? We can already do a lot of things without Web3, from online banking to distributed applications. Do we really need a blockchain? Is it being used for what it was intended for? In this episode we take on the premise that Web3 is bunk.
The Hypercloud is More Important Than the Cloud
Feb 15, 2022
As public cloud becomes more popular and competitive, developers are adopting cross-cloud platforms that can be thought of as a hypercloud. One driver for this is high availability, but many platform decisions are driven by a desire for more advanced features. Still, we must be aware of data gravity and the complexity involved with these platforms. Kubernetes is the ultimate example of a hypercloud platform, but many vendors are producing compelling tools that sit above the cloud and replace the proprietary offerings of public cloud providers. Is it time for the hypercloud? Is the traditional cloud dead?
Companies Mentioned: Red Hat OpenShift, Kasten by Veeam, NetApp, Pure Storage, IBM Cloud Satellite, VMware
How many times have we heard of an exciting new advance in enterprise IT that is poised to revolutionize the way we do things, only to see it fizzle out? Perhaps it’s bad business practice, but what about those that were acquired by bigger companies and disappeared? Is it a bit of bad luck? Or is it something more sinister? In this episode we explore the idea that Disruptive Technologies Get Buried.
2022 Will Be the Same For Technology As 2021
Jan 11, 2022
Have you ever watched a movie sequel that felt just like the previous one but with slightly newer references? That’s how 2021 felt compared to 2020. In 2022, will we see the technology landscape shift dramatically? Or will it feel like the past twelve months but with faster speeds and lower costs? Join a group of forward-looking thinkers on this episode as we decide whether or not 2022 will be the same year for technology as 2021 or not.
Can Disaggregation Solve Lock-In Problems?
Dec 21, 2021
Disaggregation is the future of networking. Decoupling the operating system and software from the hardware will give us the freedom to build the networks we’ve always wanted to use without the restrictions of bundling bad parts together. But does this freedom really exist? Are we going to have the future we’ve always dreamed of? Or does disaggregation lead to other lock-in problems? Join us for this episode of the On-Premise IT Roundtable where we discuss the advantages and issues with network disaggregation.
A Service Provider Network is Not Your Enterprise Network
Dec 07, 2021
Networking is networking, right? It’s all the same routers and switches that move packets between locations. How different can they really be? Join Tom Hollingsworth as he brings on a panel of service provider networking experts to dissect the differences between the traditional enterprise network and a service provider or transit network. What are the ultimate goals of your infrastructure? And who is the customer? When you finish listening to the episode, will you agree that A Service Provider Network is Not Your Enterprise Network?
You’re Not Ready to Deal with Unstructured Data
Nov 30, 2021
Unstructured data is varied in nature and continues to grow, especially in our new world of telematics, multimedia, and AI. In this sponsored episode, Amy Fowler of Pure Storage joins Gina Rosenthal, Enrico Signoretti, and Stephen Foskett to discuss the challenge of dealing with unstructured data. With IDC estimating that 85% of user data will be unstructured, the challenge isn’t to provide structure but to develop solutions that can handle data as it is. Today’s object storage systems offer extreme scalability, high performance, and cost-effectiveness, along with API-based integration with applications, opening up a world of possibilities for analytics and insights. This is an area where developments in data storage systems can help improve business outcomes and potentially benefit the world at large.
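As a small illustration of that API-based integration, here is a hedged sketch that lands an unstructured file in an S3-compatible object store with searchable metadata attached. The bucket, key, and metadata names are illustrative assumptions, not anything discussed in the episode.

```python
# Illustrative sketch: store an unstructured blob in an S3-compatible object
# store with metadata, then list objects by prefix for a downstream analytics job.
# Bucket/key/metadata names are placeholders; credentials come from the normal
# AWS configuration chain (environment, config files, or an instance role).
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="unstructured-data-lake",                      # placeholder bucket
    Key="telematics/audio/support-call-2021-11-30.wav",   # placeholder key
    Body=b"raw audio or sensor payload would go here",
    Metadata={"source": "call-center", "retention": "3y"},  # queryable context
)

# Later, an analytics pipeline can enumerate what landed under a prefix.
listing = s3.list_objects_v2(Bucket="unstructured-data-lake", Prefix="telematics/audio/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

The point is simply that object APIs let applications attach context at write time, which is what makes large pools of unstructured data tractable for later analysis.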
When designing Wi-Fi networks, there are a number of buttons that you can push to help make the perfect setup to maximize coverage and throughput. Depending on the application you use to do the work you may find some buttons easier to use than others. However, the most popular value to change seems to be the data rate. Is this a good idea? Or are you just setting yourself up for failure? In this episode, will we find out that Data Rates Don’t Change Cell Size?
Containers were supposed to be lightweight and stateless, unlike the heavy virtual machines that dominate today’s datacenter. In this first on-premises episode in two years, Stephen Foskett puts the question of heavy containers to Ned Bellavance, Nico Stein, and Nathan Bennett. Containers were intended to abstract system services rather than hardware like virtualization, and this results in their light and stateless nature. But this is a result rather than a necessary quality of containers, and companies are increasingly deploying heavy containers with multiple application components and data. As microservices applications rise, so do container management systems, and these are increasingly including light, heavy, and system service containers for networking and storage. Should heavy containers exist?
The Cloud Is Finally Ready For the Enterprise
Oct 26, 2021
The question of cloud readiness has plagued the enterprise for a decade, but we have finally gotten to the point that enterprise IT is coming to the public cloud. Some of the drivers for cloud adoption include the pandemic and work-from-anywhere, as well as the current shortage of chips and equipment. But there is a pull toward cloud as well: now that applications have evolved to the point that they work better in the cloud, enterprises are making the switch. Applications like collaboration and email belong outside the firewall, and cause more trouble than they’re worth inside the datacenter. Enterprises today have a difficult decision to make on a per-application basis: Does this belong in the datacenter? Hybrid cloud? Public cloud? Software as a service? And there are many considerations to make, from security to ownership to cost.
Join us on November 3–5, 2021 for another exciting Cloud Field Day event, where you can learn more about the premise of this episode.
Data Protection Software Doesn’t Solve Ransomware Issues
Oct 12, 2021
The rise of massive ransomware outbreaks has led to a number of strategy changes, including the pivot of disaster recovery and business continuity companies into marketing their solutions as an effective way to stop ransomware. But are these products really going to solve the issue? Or just treat the symptoms of the wider problem?
Join us on October 20–22, 2021 for another exciting Security Field Day event, where you can learn more about the premise of this episode.
You Need To Get Out Of Your Comfort Zone If You Want To Learn
Sep 28, 2021
Techies are inherently curious, and magic happens when they step outside their area of expertise and really dig into something new. That’s what we’re talking about this week on the On-Premise IT Roundtable with a set of delegates from Tech Field Day 24. We’re mixing disciplines and breaking down the walls between industry verticals and learning about everything from IoT to storage to container management to smart networking. When we see things we don’t know, we can spot novel approaches and try to bring them to our own realm.
Will Working From Anywhere Destroy Enterprise IT?
Sep 14, 2021
With a hybrid work model now a reality for a growing number of enterprise employees, IT departments are now rethinking the way they deploy their resources. One of the big questions is what happens during the next upgrade cycle. Are IT budgets going to be cut? Do we need full-featured devices in an office no one is going to visit? Should that spending be shifted to the end users or to cloud services? Will working from anywhere be the end of enterprise IT?
Be sure to check out Networking Field Day, which airs September 14-16th, 2021.
Service Mesh Is a Use in Search of a Problem
Sep 07, 2021
Today, it seems like every cloud vendor is pitching a service mesh, but is this really necessary? In this episode, Calvin Hendryx-Parker, Ned Bellavance, and Jason Benedicic join Stephen Foskett to discuss the value of a service mesh to modern application stacks. In theory, a service mesh facilitates communication between services and microservices in modern applications. It also provides a data and management plane to today’s scalable applications and can improve security and flexibility. But a service mesh is built of proxies, and this adds multiple layers of complexity to already-complex containerized application platforms. Before companies deploy a service mesh, they should consider whether the benefits outweigh the drawbacks.
The history of information technology is bound up with hype. In this discussion, Tech Field Day 24 delegates Paul Stringfellow, Robert Novak, and Craig Rodgers discuss the endless parade of hype with Stephen Foskett. Whether it’s 5G or edge or disaggregated infrastructure or the cloud, every technology goes through a wave of hype before it becomes reality. How can we tell what a technology will really deliver and what’s just marketing talk? The famous Gartner Hype Cycle is more about marketing than the reality of technology adoption, and a product can be good or bad regardless of the amount of hype surrounding it. That’s one thing the delegates are watching for during the Field Day events! Amazingly, hype can even be productive in pushing change and adoption of new concepts, and even product development to meet these inflated expectations.
Be sure to join us for Tech Field Day September 1st & 2nd, 2021!
When is it Time to Scrap Everything in Storage and Start Over?
Aug 17, 2021
In today’s ever-shifting enterprise IT landscape, when is the right time to completely respec your storage setup? Should you tear it all down and start over? Should you learn from the past and keep some of the old ways?
When it comes to innovating storage practices in a changing world, there are a lot of opinions on how to do it. Join host Stephen Foskett and his round table of esteemed guests from around the storage industry as they discuss these questions and more.
It’s Time to Embrace the Bottlenecks in Storage
Aug 03, 2021
Storage administration has always been about fighting bottlenecks, but today’s architecture means it’s time to embrace them instead. In this episode, our panel discusses the premise that it’s most important to match bottlenecks to the position in the infrastructure stack and application. One reason for this is the amazing bandwidth and low latency we have, thanks to Optane PMEM, NVMe, flash, and other technologies, but another is the emergence of new technologies that enable disaggregated architecture, moving storage closer to the application.
Technical Debt Will Bankrupt Your Modern Apps
Jul 27, 2021
Traditional IT architecture is a poor fit for modern applications and holds us all back. So-called technical debt can pile up, forcing companies to spend time and money supporting legacy infrastructure for the sake of keeping it running rather than moving the company forward. In this special episode of the On-Premise IT Roundtable podcast, sponsored by Pure Storage, we discuss the modern application stack and how technologies like object storage and Kubernetes are allowing companies to break free from their technical debt once and for all.
High Availability Is the Worst Reason to Go Multi-Cloud
Jul 20, 2021
Multi-cloud has quickly become a buzzword in the enterprise IT industry, but as organizations start to adopt multi-cloud strategies, their reasoning behind the switch may not justify the act itself. Although a multi-cloud approach can increase an organization’s availability, that alone doesn’t warrant introducing the complexity and costs that come with multi-cloud. Listen to this episode of the On-Premise IT Roundtable Podcast to hear how host Tom Hollingsworth and our esteemed guests unpack the concept of multi-cloud and uncover more impactful reasons than just high availability to consider adopting multi-cloud.
Is Enterprise Wireless the Key to Bringing Workers Back to the Office?
Jul 06, 2021
The world is starting to recover from the pandemic. People are starting to ask if it’s time to return to the office. And more than a few aren’t ready to go back. At the same time, employers are trying to find a way to get people back, and they’re asking questions about the infrastructure. Could new fast wireless access points be the tipping point? In this episode, we discuss whether or not new wireless networks are enough to bring workers back to the office.
Be sure to join us for Mobility Field Day July 14–16, 2021!
Cloud is the way that we consume our media, content, and even perform our jobs. We rely on it daily to live our lives. And cloud relies on services to ensure prompt delivery and reliable service. We know networking and storage are critical. What about the other services that we don’t always see? In this episode, we explore the importance of DNS to the way modern cloud operates and decide whether or not DNS is critical for the operation of modern cloud. Special thanks to BlueCat Networks for their partnership with this episode.
Enterprise IT Has Transformed Public Cloud
Jun 22, 2021
Cloud computing has transformed enterprise IT, but it is perhaps less obvious how traditional concepts like data protection and high availability have infiltrated the public cloud. In this episode, Cloud Field Day 11 delegates Max Mortillaro, Adam Post, and Thom Greene join Stephen Foskett to discuss the many ways that cloud computing has adapted to the demands of the enterprise. This has been called “inside-out” computing, since it takes the things that have been run inside the datacenter out to the public cloud, as opposed to the “outside-in” approach, which brings concepts like Kubernetes and DevOps to the enterprise.
Be sure to join us for Cloud Field Day June 23-25, 2021!
Humans Aren’t Smart Enough to Manage Storage
Jun 15, 2021
With so much data at the average organization’s disposal, the task of storing it all — and storing it in an effective manner — seems to be quickly becoming more than humans can handle on their own. In this episode of the On-Premise IT Roundtable podcast, sponsored by Pure Storage, host Stephen Foskett is joined by Pure’s Prakash Darji, as well as IT luminaries, Gina Rosenthal and Chris Grundemann, to discuss how IT organizations can smartly manage their storage. SPOILER ALERT: it looks like it’s time to let machines like Pure Storage’s FlashBlade do the storage management work for us.
Composable Infrastructure Will Save the Data Center From the Cloud
Jun 08, 2021
With the rise of Kubernetes and other completely cloud-based deployments, on-premises data centers seem to be on the way out to make way for the scalability and dynamism of the cloud. Innovations in data center and AI composability showcased by Xilinx and Micron at the last Tech Field Day event as well as those displayed by Liqid at the recent AI Field Day event, however, might say otherwise. By defining infrastructure operations through software — much like how the cloud does — and building it on top of faster, more capable hardware, data centers can act at the speed of business, saving them from being outdated and outmatched by cloud-based infrastructure.
We Have Reached the End of the SSD Era
Jun 01, 2021
The enterprise storage industry has passed from the hard disk era through hybrid to all-flash, but what comes next? In this sponsored episode, Moshe Twitto of Pliops joins Enrico Signoretti, Chris Evans, and Stephen Foskett to discuss the end of the SSD era. Considering the durability issues of NAND flash, the limited value add of most SSDs, and the emergence of disaggregated infrastructure, the industry is on the cusp of a radical transformation. While disks and flash SSDs will still exist, these will have less relevance architecturally as intelligence moves closer to the storage media. The next architectural step is to develop flexible solid state key-value store devices rather than continuing with block storage.
AI-powered applications are rolling out everywhere, but enterprises are really not ready for this shift. From lack of infrastructure to power training and inferencing, to a skills gap to manage data and AI, to a lack of readiness to deal with diversity and ethics, there are many holes in enterprise IT. Engineering teams tend to approach all applications from a technical perspective, but AI brings more to the table, demanding a deeper knowledge of bias and legalities than most other applications.
Be sure to join us for AI Field Day May 27-28, 2021!
Traditional Network Monitoring Tools Can’t Keep Up With Modern Environments
May 11, 2021
The enterprise network is full of data. We’ve spent the past decade trying to learn how to unlock its secrets. However, the recent shift away from on-premises networks to cloud-focused solutions has made much of that progress obsolete. We live in a world of telemetry and analytics and data lakes, not SNMP and simple measurements. In this episode, brought to you by Cisco, we take on the premise that traditional network monitoring tools can’t keep up with modern networks.
Are Our Networks Just Becoming Cloud Service Providers?
May 04, 2021
We spent hours and hours, dollars and dollars to create state-of-the-art networks, ensuring that all the computers in our companies could talk to each other. Now, it seems like nobody uses them… unless they’re accessing their cloud-based Infrastructure-as-a-Service platforms. Have networks just become on-ramps to the internet interstate? As we ramp up for Networking Field Day 25, Tom and Co. dive into the topic of on-premises networks and their function in this cloud age on this episode of the On-Premise IT Roundtable Podcast.
Multicloud Is the True Driver of SASE and SD-WAN Adoption
Apr 27, 2021
SD-WAN has been the truest example of implementing software-defined networking in the last decade. With the rise of SASE, we’re seeing the integration of software-defined security as well. SD-WAN helped us tie our branch offices together in ways we never thought possible. But the environment of 2020 and beyond doesn’t look like a traditional office any longer. Workers are leveraging the cloud more and more. Does that mean that SD-WAN and SASE adoption have slowed down? Not at all! So what gives? In this episode brought to you by Cisco, we explore the idea that Multicloud Networking is the True Driver for SD-WAN and SASE Adoption.
Is It the End of the Server as We Know It?
Apr 13, 2021
Dynamic. Extensible. Scalable. These are just some of the key traits today’s companies need from their infrastructure, and frankly, it seems like the server as it has been known is not built to the same specifications. Add on top of that the upcoming announcements by Micron and Xilinx, covered in the next Tech Field Day event at the end of April, and it’s apparent that major changes are in store for servers. Check out this episode of the On-Premise IT Roundtable to hear how these changes are affecting server infrastructure and, ultimately, answer the question: is it the end of the server as we know it?
Wi-Fi networking is the way we do business today in the enterprise. Our workstations, printers, and mobile devices are all connected with the technology to allow us to roam around without wires. But what about 5G? The newest generation of mobile broadband technology is rapidly being deployed and adopted. Recent rule changes with the FCC have enabled providers to offer a private version of LTE and soon 5G to change the way we offer services to our users. But is it enough to topple the dominance of Wi-Fi in the enterprise? In this episode, we explore whether or not Private LTE will displace Wi-Fi.
Do We Need To Toss Out Hacked Software?
Mar 23, 2021
Modern software is not invulnerable. We constantly find ourselves under attack from a variety of angles trying to sneak in to steal information or cause harm. We can recover from these hacks, but what happens to the software that was violated? Do we need to patch it? Or do we rip it out and replace it with something else? In this episode, we ask if we need to toss out hacked software.
Is Security Just A Bunch of Products?
Mar 16, 2021
Security is a lot of things. It’s the way we protect ourselves and our information. It’s a process and a need. But can we boil it all down to some product? Are we able to just say that we can order some software and a piece of hardware to run it on and we’re protected? Does the Security Check Box Product list really keep us safe? Or is there more to it? In this episode, we discuss whether or not Security Is Just A Bunch Of Products.
Pure Storage Spotlight: Does Kubernetes Even Need Storage?
In this sponsored episode by Pure Storage, we’re discussing Kubernetes and storage. Specifically, we’re questioning whether Kubernetes even needs persistent storage, or if it’s just about the data being kept in containers. With modern applications moving beyond our traditional ideas of what storage should look like, what does that mean for the cloud and new technologies? Join us as we figure out where storage is headed from here.
With the recent departure of Pat Gelsinger and other executives, VMware finds itself in uncertain times. Rumors have swirled for months about plans to spin it out, leverage the stock to take parent Dell public again, or even purchase the stake outright and make it a part of the organization proper. With deadlines looming later this year, what happens to the tech titan when time’s up? Join us on this episode of the roundtable as we try to determine the fate of VMware.
XDR Isn’t Enough for Your Security Needs
Feb 23, 2021
Cisco Spotlight: Is XDR Enough for Your Security Needs?
What is XDR exactly? Is it a specific tool? Is it a term that encompasses the entirety of security? Or is it just a buzzword that sells more security devices? How can XDR help you understand the forces at play in your enterprise? Can XDR offer you the capabilities to enhance your security and make it easier for your operations staff to keep everyone safe?
Join us in this episode of the On-Premise IT Roundtable Podcast, sponsored by Cisco, as we try to find out if it’s true that XDR Isn’t Enough For Your Security Needs.
For more information on Cisco’s solution, check out Cisco SecureX!
Metadata is not data… or is it? Should it be treated and protected like data? For example, using metadata, phone providers may not know the content of your conversation, but they can certainly tell who you called and when. This raises the issue of security and how bad actors could use metadata to their advantage.
In this episode, we tackle the metadata vs data debate and discuss the distinction between these categories of information.
The world of networking and security are being wrapped up together with the advent of Secure Access Service Edge, or SASE.
Is this a new technology that is going to revolutionize secure connectivity for businesses and users? Or is it a new coat of paint on an existing product line? In this episode, we try to figure out if SASE is just a marketing term or if there’s something really there.
On February 15, Bob Swan is stepping down from Intel as the CEO, and Pat Gelsinger will take his place.
The former CTO steps into a company embattled by competitors and the loss of large customers. Gelsinger has the leadership skills and the perspective to lead the CPU giant but does he have the time to make the changes that it will take to ward off the challenges?
In this episode, we debate the premise that Pat Gelsinger is going to save Intel.
Guests
Stephen Foskett, Tim Crawford, Gina Rosenthal, Matt Bryson
Docker Only Cares About Developers Now
Jan 26, 2021
What direction is Docker going in now? Does it really only care about developers now?
There are over 10 million people out there in the world using Docker, a huge and fantastic community that has had a lot of vendors involved.
About a year ago, Docker reached a realization: the company had effectively become two companies combined. One focuses on production orchestration, selling to ops in a top-down model, and the other is developer-focused.
So what next?
Find out exactly what happened, and whether Docker is really just focused on developers now, as Donnie Berkholz, VP of Product at Docker, walks us through Docker in 2021.
Panelists
Calvin Hendryx-Parker, Donnie Berkholz (Docker), Larry Smith
Is Storageless Storage Just Someone Else’s Storage?
Jan 19, 2021
Storageless storage? What, in the name of oxymorons, does that mean?
This new buzzphrase has hit the storage world and we need to figure out what it means, and whether this will be the next storage revolution. Storage companies do need to find a way to abstract the complexity that comes with storage and find new ways of delivering their services and platforms. Perhaps this is it.
This immediate need for storage simplification is driven primarily by the technology evolution that COVID-19 and remote work has forced on the tech industry. Many companies can no longer afford to wait and see on their storage, nor can they afford to cobble together something in the hope that it will work ‘for now’.
Storageless storage aims to reduce operational complexity, shrink operational expenditure, and provide a simpler, managed service that requires less in-house expertise – just like the cloud promised (and delivered on) all those years ago. However, there are a lot of questions that need to be asked around how it is going to work, what it actually means, and the compliance issues that may arise with utilizing remote storage.
Our Storage Field Day 21 will nail this topic (and more) down, answer your questions, and figure out exactly what storageless storage is from the companies that are driving this new platform.
Jan 05, 2021
Could anyone say they got their predictions mostly right in 2020? Even a little bit?
As we begin 2021, we take a moment to realize that no matter how straightforward things might seem right now, there is always something coming at us that could upset every potential idea we have.
Join a group of the brightest minds in the enterprise IT community to look at some of the trends and drivers that will shape the year to come.
Ask yourself, do predictions even matter in 2021? Or will this be the year when everyone gets it all right?
Participants
Keith Townsend, Ned Bellavance, Tim Crawford, Stephen Foskett
Bug Bounty Programs are Just Legalized Bribery
Dec 15, 2020
Finding bugs in software isn’t new. Finding security bugs is just as old. Even the process of selling them to nefarious operators has history.
The rise of bug bounty programs is changing the economics of disclosure and patching, however. We’ve entered a new era of people trying to get top dollar for their investigations. The morality around it all is troubling.
In this episode, find out if bug bounty programs are just legalized bribery.
On-Premises for today’s roundtable:
Jens Soeldner, Pieter-Jan Nefkens, Christopher Kusek
Traditional Security Models Don’t Work From Just Anywhere
Dec 08, 2020
As workers transition into the new state of working from their home office, what does that mean for our existing security technologies?
How does the enterprise model of firewalls and appliances work when no one is in the corporate offices? Does the cloud hold the key to our salvation? Or is this problem too much to deal with in a pandemic?
Join the community experts and a guest from VMware as we look at how traditional security models need to be tweaked to work from where we work now.
On-Premises for today’s roundtable:
Rohan Naggi (VMware SME), Chris Grundemann, Jason Benedicic
Docker has recently announced that they will rate limit image pulls from Docker Hub. These images have become critical to modern cloud infrastructure.
Just about every Kubernetes environment pulls dozens of images from Docker Hub, and even if this move doesn’t cause issues immediately, it sends a message that the community needs to find another place to store images.
Maybe it’s a positive for the community to find another image repository. Maybe this will help Docker become a more supportable business within Mirantis in the future, but it does disrupt the cloud ecosystem. The big winner might be Microsoft, which could easily replace Docker Hub with GitHub and their other offerings, or it could be Amazon, Google, or even Red Hat.
After listening to our podcast, Docker reached out with some important clarifications and corrections. Here are the facts from Docker:
In November 2019, Mirantis acquired the Docker Enterprise business only. Docker itself refocused back on developers, and Docker Hub and Docker Desktop remained with the company. As such, Mirantis is neither involved in nor making any decisions regarding the direction of Docker Hub or Docker itself – they only acquired Docker Enterprise.
Docker remains committed to supporting and growing the Open Source ecosystem. In fact, we published this blog post announcing the launch of a special program to expand our support for Open Source projects that use Docker. Eligible projects that meet the program’s requirements (i.e., they must be open source and non-commercial) can request to have their respective OSS namespaces whitelisted and see their data-storage and data-egress restrictions lifted. More than 80 non-profit organizations have joined the program.
The rate limiting only impacts a very small percentage of unauthenticated users. Based on these limits, we expected only 1.5% of daily unique IP addresses to be affected — roughly 40,000 IPs in total, out of more than 2 million IPs that pull from Docker Hub every day. The other 98.5% of IPs using Docker Hub can carry on unaffected — or, more likely, see improved performance as usage from the heaviest users decreases. More details are available in Docker’s blog posts on the change.
On-Premises for today’s roundtable:
Joep Piscaer, Larry Smith, Calvin Hendryx-Parker, Enrico Signoretti; Moderator: Stephen Foskett
You Need Cloud Gateways to Transition to the Cloud
Nov 24, 2020
The world of the cloud is not your traditional enterprise IT space. The requirements for how we access data and services are different, and the tools we use to facilitate that access also need to change.
Is there a perfect mix of on-premises hardware and cloud software that will enable our digital transformation to the cloud? In this episode, we will see if you need cloud gateways to transition to the cloud.
On-Premises for today’s roundtable:
Craig Connors, Paul Stringfellow, Greg Stuart; Moderator: Tom Hollingsworth
There’s Still No Viable Open Source Business Model
Nov 17, 2020
Over the last 30 years, we’ve seen the rise of open source software and of companies trying to make money from it. But what’s the business model for open source software? How can an open source project become your full-time job, and how can you make money at it? There’s a related question for the consumers of open source software, who want technical support and want to incentivize the development of new capabilities. One would think that a viable business model would have emerged over the decades, but has it?
On-Premises for today’s roundtable:
Alex Ellis, Larry Smith, Calvin Hendryx-Parker, Ather Beg, Joep Piscaer; Host: Stephen Foskett
Solving Networking From Home Challenges
Nov 10, 2020
VMware Spotlight Podcast
VMware Work From Anywhere Spotlight Podcast: The pandemic has created challenges for professionals working from home, not the least of which is networking. With so much technology focused on the enterprise, the home as the new branch office seems to be neglected. In this episode, we speak with experts from the community and from VMware about how SD-WAN can play a key role in helping workers at home utilize their bandwidth and provide resilience over commodity connections.
Governments Trying to Disrupt Botnets is a Bad Idea
Nov 03, 2020
The Premise: Are Governments Disrupting Botnets a Bad Idea?
Botnets are becoming a huge part of Internet attack methods and data collection tools. The threat of thousands of machines being used to launch DDoS attacks or disrupt legitimate services cannot be overstated. But how much of a role should the government play in disrupting them? Is it right for your government to play such a big role in security? In this episode, we explore the idea that governments trying to disrupt botnets is a bad idea.
Although it is elastic, agile, and simple, enterprises have faced many challenges using cloud storage. It’s difficult to manage the cost of cloud storage, since it’s so easy to set up and run. And not all applications are able to access cloud storage directly, so gateways and translation layers are needed. There are so many options now that it can be a challenge even to understand them, let alone use them properly. The focus should be on selecting and managing the right type of storage for the job, not simply moving data to the cloud.
Enterprise Networking Has No Need for IPv6 – The On-Premise IT Roundtable
Oct 20, 2020
The Premise: Does enterprise networking need IPv6?
Enterprise networks have been on the verge of address exhaustion for years. The once-utopian promise of unlimited address space with IPv6 seems to have disappeared with barely a whimper. Our networks seem to be running just fine on IPv4 for the foreseeable future. In this episode, we discuss whether or not enterprise networking needs IPv6.
Object Storage is the Future of Primary Storage
Oct 13, 2020
Pure Storage Spotlight Podcast featuring Rajiev Rajavasireddy, Vice President of Product Management.
The Premise: Is object storage the future of primary storage?
On-premises object storage has traditionally been associated with archive data. Scale and density were important, but the performance was not a major consideration. Fast object storage was an oxymoron. That paradigm is changing. DevOps teams across enterprises are now building container-based applications leveraging S3. High-performance applications are being designed to bring the capabilities and benefits of S3 and cloud-optimized architectures to on-premises environments. Vertica in Eon mode and Splunk SmartStore are a couple of examples highlighting this trend. Will this trend become more prevalent? Will a majority of new high-performance applications be designed with object storage? Will object storage become the future of primary storage? Tune in to this podcast to find out.
Companies are Not Inherently Good or Evil
Oct 06, 2020
The Premise: Are companies inherently good or evil?
We all have the idea that there are companies that are good and companies that are bad. Some do kind things for their communities and others seek to take as much as possible. But are the companies really at fault here? Or is this behavior being driven by people behind the scenes? In this episode, we set out to prove that companies aren’t inherently bad but people can be.
Working From Home Isn’t All That Great
Sep 22, 2020
The Premise: Is working from home all that it’s cracked up to be?
The current state of the world has forced us all to work from home. It’s what we’ve always wanted, right? Freedom to hang out in our PJs all day and no travel to the office. Yet, the reality of working from home is less appealing when you get into the details. For many, working from home isn’t all that great. In this podcast, we’ll explain why and try to find some solutions to the issue.
NGINX Spotlight: We live in an increasingly interconnected world that hinges on responsive APIs. APIs are becoming a standard way for us to interface not only with software, but with the devices that run that software. We are trying to do magical things with APIs now, yet not all APIs are created equal. As we develop more and more around APIs and their ability to be accessed, we’re finding some limitations, which means we have to develop new thoughts and new ways of handling things. One of the latest is the real-time API, which can process an API call end-to-end in under 30 milliseconds. Sound too good to be true? Our panel will dig in to find out the answer to this question: Are all APIs created equal?
Facts are facts, but that doesn’t mean that your data is telling you the whole story. Data experts know that biases crop up any time we look at data, and this cannot be avoided. That’s why we need to be sensitive to these biases when making judgments and when planning data collection exercises. We apply these concepts to everything from sales metrics to the COVID pandemic to government surveillance. This episode reunites our social justice panel, including Leon Adato, Josh Fidel, Karen Lopez, and Phoummala Schmitt, with moderator Stephen Foskett.
You Should Roll Your Own Whitebox Storage
Aug 25, 2020
The Premise: Should you roll your own storage system?
As enterprise storage moves to commodity hardware and software, has the time come for businesses to consider building their own storage systems? In this podcast, we discuss the pros and cons of packaged enterprise storage solutions and consider how the market is changing. Many modern storage solutions, from VMware vSAN to cloud storage, are based on commodity hardware. And there are many instances where a production storage environment uses self-configured software too. But we still recommend sticking with a supported storage solution.
Wi-Fi Isn’t Always the Best Wireless Solution – The On-Premise IT Roundtable
Aug 11, 2020
The Premise: Is Wi-Fi the Best Wireless Solution?
In a highly connected world, we rely on our technology to stay online at all times. Almost everything uses wireless today, right? But what kind of wireless does it use? You might be tempted to say Wi-Fi, but as it turns out, Wi-Fi isn’t always the best wireless solution for your needs.
IoT Doesn’t Need Wi-Fi 6E – The On-Premise IT Roundtable
Jul 28, 2020
The Premise: Does IoT Need Wi-Fi 6E?
The standards for faster wireless communications are being adopted much more quickly. We have faster speeds and different spectrum allocations to work with. However, the largest consumers of the wireless space in the coming years are the ones that won’t be able to use those faster new speeds without a radical shift in direction. IoT devices use slower, older, cheaper radios. So, the premise of this episode is that IoT Doesn’t Need Wi-Fi 6E.
Disaggregated Architecture Runs Best with Enterprise Hybrid Cloud
Jul 21, 2020
The Premise: Does disaggregated architecture run best with enterprise hybrid cloud?
Pure Storage Spotlight In this episode, we are taking on enterprise cloud architecture, and specifically, the move towards disaggregated infrastructure. As enterprises move towards cloud architecture, a lot of companies are jumping in. Software solutions like VMware Cloud Foundation, for instance, drive the ability of enterprises to build a hybrid cloud. But there has been a lot of talk about architecture as well. The industry has seen a lot of changes in the last several years. There was a move towards converged, then hyperconverged, and now disaggregated architecture offers benefits for an optimized hybrid cloud (for example, Pure Storage and Cisco’s FlashStack). Is disaggregated the right move?
A Single Source of Truth Can Bring Justice to the World
Jul 14, 2020
The Premise: Could a single source of truth help bring justice to the world?
Continuing our effort to reconcile the perspective of enterprise IT nerds with the social upheaval happening all around us, the On-Premise IT Roundtable Podcast guests discuss the concept of a single source of truth. In IT analytics, we believe that one system or application has “truth” even if others disagree. Is this true of the wider world as well? What is the difference between data, fact, and truth and how can we leverage this concept to bring justice to the world?
IT Observability Principles Can Help Bring Justice to the World
Jul 07, 2020
The Premise: Could IT observability principles help bring justice to the world?
Technical people love clarity, facts, transparency, and data, but the world outside the datacenter is anything but. How can we in tech apply ourselves and our experience to promote justice and transparency in the real world? Why don’t governments, media, and the population at large know the facts about crises like the Coronavirus pandemic and the racial justice issues that are shaking the United States and the world? And what can we do to help improve the situation?
AI Can’t Do Much for WiFi – The On-Premise IT Roundtable
Jun 30, 2020
The Premise: Does artificial intelligence do anything for WiFi?
Today’s world is driven by software. Applications rule the tech space, and they are increasingly relying on machine learning (ML) and artificial intelligence (AI) to make better, faster decisions. But there are some technologies that don’t really take full advantage of AI. Can an algorithm replace a person troubleshooting? Is something that is as much art as science able to leverage the power of a computer to get the job done? Is it true that AI can’t do much for Wi-Fi?
Orchestration is the Reason Enterprises Haven’t Adopted Containers – The On-Premise IT Roundtable
Jun 23, 2020
The Premise: Orchestration is the reason enterprises haven’t adopted containers.
Pure Storage Spotlight Orchestration is almost synonymous with containers nowadays. One popular opinion is that maybe the reason enterprises haven’t adopted containers quickly is that people are hungry for a way to orchestrate containers that fits into their overall environment. Is this the reason why containers have yet to be widely adopted? Follow this discussion, featuring Jon Owings – Principal Solution Architect at Pure Storage and Cormac Hogan – Director and Chief Technologist at VMware, and decide for yourself.
Encryption is Ruining Network Security – The On-Premise IT Roundtable
Jun 16, 2020
The Premise: There used to be a time when the Internet was free, wild, and nothing was encrypted. With the advent of things like SSL and TLS, some network analytics and security folks are wondering: is encryption ruining security?
The world is encrypted today. Our traffic is being protected from beginning to end so our identities and data are safe. But how safe are we in reality? What about the traffic that we want to see? How can we protect against threats when everything is using TLS to hide from our tools? Is there a solution to figuring out how to see the unseen?
Anomaly Detection is the Only Good ML in the Enterprise – The On-Premise IT Roundtable
Jun 02, 2020
The Premise: There are a bunch of new AI/ML startups, and we hear about the latest advances and interesting concepts, but it tends to fall down when it comes to implementation. Is there any good use for AI in the enterprise?
Artificial Intelligence and Machine Learning are extending the capabilities of our technology at a rapid pace. Or so we have been led to believe. But what is ML really giving us? What good is it to our enterprise? Is there a use case that shines above all others? Or is it just marketing fluff? This episode of the On-Premise IT Roundtable discusses how Anomaly Detection is the only good use of ML in the Enterprise.
vSphere 7 Means NVMe-oF is Ready for Prime Time – The On-Premise IT Roundtable
May 26, 2020
The Premise: With the release of vSphere 7, there are some new storage features and interoperability, but is NVMe-oF ready for prime time?
Pure Storage Spotlight Those of us who have been following storage and server architecture are pretty excited about NVMe and various things over fabrics. So far, it’s been a challenge to bring NVMe-oF into the enterprise datacenter. Does the release of vSphere 7 mean that this technology is finally ready for prime time? Follow the discussion and decide for yourself.
Virtual Events Are Good Enough – The On-Premise IT Roundtable
May 19, 2020
The Premise: As the world has changed, we’ve started to see virtual events become the norm. But are virtual events just as good as in-person events?
Lots of us are experiencing the joy of working from home right now and many people are also experiencing the fun of virtual events. How are these events working?
Results from a quick non-official poll: “I am never leaving my house again!” “I despise virtual conferences.” “An in-person interaction can’t bring more to the table than my computer screen and I can bring (to the kitchen table).” “Events are a thing of the past and are being replaced with the almighty virtual option.”
Who’s right? Join us as we discuss the various ways in which we can weigh a virtual vs in-person event, and why there are some things that just can’t be replaced.
IT Isn’t Really “That” Broken – The On-Premise IT Roundtable
May 12, 2020
The Premise: With the number of computer performance issues reported by consumers each day, one begins to wonder: is IT really broken?
Computers never work. I can’t check my email. The Internet is always slow. No matter what I’m doing, it seems like Information Technology is never the way I want it. Everything is completely broken. Or is it? Join us as we discuss the various ways in which IT can be operating properly but still not the way users want, and why there is a huge difference between slow and broken.
Feature-Based Licensing for Infrastructure is a Good Thing
May 05, 2020
The Premise: Software companies have found a way to offset development costs, but is charging customers for features a good thing?
Modern technology has focused on delivering value. The current shift away from hardware to software means that companies need to recognize how they deliver that value. In order to ensure they are responding to their customers’ needs in the best possible way, they need to charge appropriately for the features that are being used. So we ask the panel today: is feature-based licensing for infrastructure a good thing?
Enterprise AI is a Bunch of BS – The On-Premise IT Roundtable
Apr 28, 2020
The Premise: There is so much talk about artificial intelligence and machine learning from many enterprise IT companies lately, and the general feeling is that a lot of it is just BS.
Enterprise AI seems to be a buzzword we are having trouble escaping. Is it following us, or just getting smart enough to know the paths we tread to shake it? The roundtable faces this pursuer and decides if we should take away its boastful title, call it something else, or if we have crossed the threshold far enough for it to lay claim to its moniker. Is Enterprise AI a bunch of BS? Follow the discussion and decide for yourself.
COVID-19 Is A Security Disaster – The On-Premise IT Roundtable
Apr 21, 2020
The Premise: With more people working from home, organizations need to focus on both security and humanity during this COVID-19 pandemic.
Security will always be a bit of a treadmill that organizations need to keep moving on. But that treadmill got kicked into overdrive as a result of COVID-19. Organizations that seemingly had well-implemented security policies in place now have to account for everyone working from home. This changes the threat surface, from the devices used to how traffic is routed, and often means new services being brought online. So will COVID-19 be remembered as a security nightmare? The roundtable discusses the implications and why companies need to be focused on the humanity in their midst in order to stay secure.
Physical Distancing Isn’t Possible for IT Support – The On-Premise IT Roundtable
Apr 07, 2020
The Premise: The demands of IT support make physical distancing not practical in all situations.
With many people working from home as a result of the COVID-19 pandemic, IT is faced with the challenge of how it can fulfill its support mission while maintaining a safe physical distance. Certainly the proliferation of cloud services has changed the landscape considerably for a lot of organizations, removing some of the infrastructure that would otherwise need to be maintained by the organization. But cloud data centers, automated though they may largely be, still require physical footprints, and other support becomes much harder, if not impossible, when maintaining pandemic distancing. In this episode, the roundtable discusses how they’ve changed their IT support as a result of the pandemic, whether it’s possible to maintain distances while still providing effective IT, and what support challenges lie ahead as weeks of self-isolation turn into months.
Commodity Broadband is Inferior to MPLS – The On-Premise IT Roundtable
Mar 24, 2020
By now, we’re all familiar with software-defined wide area network or SD-WAN. SD-WAN enables the use of multiple circuit types, including both MPLS and commodity broadband. Everyone knows how reliable MPLS can be. Can broadband reach that level of assurance? Given the history of using the technology with enterprise networks, our panel of experts debates the premise that commodity broadband is inferior to MPLS.
Backup is a Security Hole – The On-Premise IT Roundtable
Mar 10, 2020
Backing up data is standard practice, one that both companies and individuals take part in regularly. Having your data at your fingertips, ready to restore any potential loss and keep moving forward, is a must in all of our fast-moving industries. How do you make sure that your backup protocols take proper security measures into account? How do you know if encryption is taking place? Is malware slipping into your snapshots and being replicated? Do your backups include data access that you shouldn’t have? With every copy of your data being a potential security risk, the question we tackle in this conversation at the On-Premise IT Roundtable is: Is backup a security hole?
Single Pane of Glass is a Myth – The On-Premise IT Roundtable
Feb 25, 2020
The Premise: Single Pane of Glass is a Myth
If you use network monitoring software or SIEMs, you’re probably used to having a ton of browser windows open at any given time, because there are many aspects to the tools you need to get info from. Looking over your shoulder, a team member might tell you that they have the solution to the “many open browser” issue, and it’s four magical words: Single Pane of Glass. From a user’s perspective, it can be argued that the single pane of glass doesn’t exist at all, and vendors who push this idea are never looking beyond their own product. When users look at heterogeneous networks, there may be a single pane of glass for this and a single pane of glass for that, which ends up being 25+ panes of glass and not really solving anyone’s problems. So how do we reconcile this chasm between the user and vendor perspectives? The question we tackle in this On-Premise IT Roundtable is: Is there such a thing as an all-in-one solution, or is “single pane of glass” simply a myth?
Is Backup Dead? – The On-Premise IT Roundtable
Feb 11, 2020
The Premise: Is Backup Dead?
Data protection used to be pretty straightforward. In recent years, there have been a number of changes in enterprise backup. It’s not necessarily that backup has changed, but systems and people have changed. In fact, many small and medium-sized businesses don’t even have servers anymore. Now, we have different applications and different infrastructure, and we have to adjust our processes to accommodate new systems. In this episode, we’re talking about backup… specifically, the death of backup. If backup has no business value, is out of touch with the times, or doesn’t exist anymore altogether, then what does data protection and recovery look like today? The question we tackle in this On-Premise IT Roundtable is: Is backup dead?
Hadoop is Dead – The On-Premise IT Roundtable
Feb 05, 2020
The Premise: Hadoop is Dead.
Big data is on everyone’s mind across IT, and the storage industry is no exception. For a while, Hadoop seemed ready to conquer the world with its promise of reliable, scalable, distributed computing. However, the tide has seemingly turned away from the once ubiquitous yellow pachyderm. Big data is very much alive, but the roundtable discusses whether the complexities inherent in the Hadoop stack mean it’s fated for an untimely demise. Or will the still increasing investments in Hadoop by some customers keep it in the big data discussion for some time to come? And if Hadoop really is dead, are there any pieces that can find some new life in IT? Find out in this episode.
This episode is sponsored by Pure Storage.
Toxic People Are Unavoidable in IT – The On-Premise IT Roundtable
Jan 28, 2020
The Premise: Working with toxic individuals is unavoidable in IT.
In this roundtable, Tom Hollingsworth discusses whether it’s fundamentally unavoidable to work with toxic people in IT. First the panelists define what we mean by toxic in an IT context. Then they dig into why IT seems to have its fair share of people with toxic characteristics, and why the focus should be on the relationship between individuals, rather than singling out one party as the problem. From there, they dig into how to work with such people, when enough is enough, and how to perhaps avoid falling into the trap of toxicity yourself.
Wi-Fi Monetization is Bad – The On-Premise IT Roundtable
Jan 14, 2020
The Premise: Wi-Fi monetization is bad; Wi-Fi should be free, frictionless, and fast.
In this roundtable, Tom Hollingsworth leads a discussion about the premise that Wi-Fi monetization is bad. Some would argue that it’s evil. If venues and businesses want to offer Wi-Fi, it should be treated the same way other utilities are. These all require a degree of expense to the business, but aren’t added on as charges to customers. Keith Parsons uses the free, frictionless, and fast standard. Does that mean that everyone should offer free Wi-Fi all the time? And how does that fit into an organization’s larger IT policy framework? The roundtable makes the case and digs into the details in this episode.
IT Certifications Are More Valuable Than A College Degree – The On-Premise IT Roundtable
Dec 24, 2019
The Premise: IT Certifications are more valuable than a college degree.
Odds are that if you’ve been in IT for a while, you’ve been asked how many certifications you have. There’s no doubt that these are valuable. Yet many IT pros still feel that they need a college degree to hang on the wall. The roundtable discusses whether this is a legacy of times gone by, or whether a college degree still holds a more important place than certifications. The panel includes a wide range of experiences, with IT careers built with and without degrees, as well as someone currently in college pursuing an IT career. It’s a great conversation!
The Promise of the Cloud Cannot Be Achieved – The On-Premise IT Roundtable
Dec 17, 2019
The Premise: The Promise of the Cloud Cannot Be Achieved.
We know that The Cloud is a real thing. But of the many things called The Cloud, each of them is remarkably different. Features, capabilities, and functions vary wildly between them. Every organization is scrambling to figure out how to use the cloud, but is the promise of the cloud simply unachievable? Does the pursuit of multi-cloud mean that organizations must ignore whatever makes a cloud special, and turn it into simply someone else’s infrastructure? The roundtable discusses in this episode.
This episode is sponsored by NetApp.
Digital Transformation is a Myth – The On-Premise IT Roundtable
Dec 04, 2019
The Premise: Digital transformation is a myth.
There’s a lot of talk about digital transformation, but are organizations actually achieving it, or are they simply changing IT practices to keep up with changing infrastructure? Should we even view digital transformation as an end in and of itself? And can non-digital companies actually transform, or are industries just going to replace obsolete players over time? In this episode, the roundtable discusses a lot of the nuance often lost in grand visions of digital transformation.
BONUS: The Origins of Tech Field Day – The On-Premise IT Roundtable
Nov 22, 2019
In this bonus episode, we’re joining Stephen Foskett as he talks with some of the original delegates and inspirations for the Tech Field Day event series. They discuss the event that gave Stephen the initial idea, a fortuitous plane ride, how the first Tech Field Day event went, where the idea for the live stream started and more. It’s a great conversation and we couldn’t think of a better way to celebrate the 10th anniversary of the event.
Simplification Adds Risk – The On-Premise IT Roundtable
Nov 19, 2019
The Premise: Simplification inherently adds risk to an IT system.
Simplification may sound great and improve efficiency, but it always brings with it an increase in risk. This is because by abstracting away the complexity, you’re also hiding potential faults in the system. The roundtable discusses whether this is true, and whether there’s a way to lose some of the geek knobs without creating a risky environment.
The Administrative Hurdle of IPv6 – The On-Premise IT Roundtable
Nov 05, 2019
The Premise: The biggest hurdle to IPv6 adoption is administrative.
IPv6 is the next big thing in networking; it’s going to solve all of our network addressing issues. At least, that’s what it’s been promising for the last two decades. So why hasn’t it lived up to the hype? The roundtable discusses the idea that administration is the biggest holdup to overall IPv6 adoption. Be sure to listen to figure out how we can get to the bright, shiny, happy place that is IPv6.
Storage: You Gotta Keep ’em Separated – The On-Premise IT Roundtable
Oct 22, 2019
The Premise: You should never put primary and secondary storage on the same system.
It’s almost canonical wisdom in storage that you shouldn’t put primary and secondary storage on the same storage system. Doing otherwise is just asking for trouble. But given the rapidly changing IT landscape and the emergence of the cloud, is that really true anymore? The roundtable breaks it down in this spirited discussion.
Learning Kubernetes is a Waste of Time – The On-Premise IT Roundtable
Oct 08, 2019
The Premise: With the advent of managed services, learning Kubernetes is a waste of time.
On this episode, our roundtable discusses the premise that learning Kubernetes is a waste of time. With so many managed Kubernetes services available, actually learning the ins and outs of the obtuse orchestrator isn’t necessary for the vast majority of organizations. They discuss the actual business value of managing Kubernetes, compare it to learning vSphere, and discuss what organizations should be investing time in.
Security Can’t Keep Up – The On-Premise IT Roundtable
Sep 24, 2019
The Premise: In recent years, the velocity and sophistication of malicious hacks have accelerated beyond the capability for modern IT security to keep up.
You don’t have to follow the news very closely to find evidence of large-scale security breaches. The sophistication, breadth, and sheer velocity of malicious hacks have reached a point where IT security simply can’t keep up like it used to. The roundtable debates this subject, whether the situation is truly hopeless, and how organizations can take a modern approach to IT security.
The Cloud Should Adapt to the Enterprise – The On-Premise IT Roundtable
Sep 10, 2019
The Premise: Public cloud providers should adapt to the needs of the enterprise, not the other way around.
It would be great if all our applications were cloud native to get the best cost, resiliency, and architecture overall. But enterprises don’t move that quickly. The cloud should offer services that work for existing applications that organizations want to get out of the data center but aren’t going to refactor any time soon. The roundtable discusses the merits and why this isn’t happening right now.
Redesigning is Useless in Wireless – The On-Premise IT Roundtable
Aug 27, 2019
The Premise: When it comes to wireless, redesigns are useless.
Redesigns in wireless are done more out of compulsion than technical need. When a new access point comes out, the entire wireless network doesn’t need a redesign, other than to satisfy the need to tinker for those managing it. We discuss if and when a redesign is actually needed, why you need to consider what’s driving your wireless refresh in the decision, and how to put a monetary value on a “pointless” redesign.
SaaS Backup Isn’t My Problem – The On-Premise IT Roundtable
Aug 13, 2019
The Premise: Backing up SaaS apps isn’t my problem, the cloud provider should handle it.
We all know how traditional backup works, but SaaS is different. Since the software comes as a service, backup is just one of those services, right? The roundtable discusses this idea. Do current SaaS offerings really provide backup? If they don’t, should that even be their responsibility? And should you be doing your own backup anyway? This was a really great discussion to get you thinking on the topic.
The Traditional Office is Dying – The On-Premise IT Roundtable
Jul 09, 2019
The Premise: The traditional office will be dead in the next 5-10 years.
The traditional office is dying. Since the rise of telecommuting in the 90s, fewer and fewer people need to be in the office. With open offices killing productivity, we’re going to see the traditional office become extinct in the near term. The roundtable debates how true this is, and what makes it worth it for a lot of organizations to still keep the office lights on.
VARs are Useless – The On-Premise IT Roundtable
Jun 25, 2019
The Premise: VARs are useless.
Value-added resellers (VARs) are often characterized as useless, adding a needless cost for something that should be sold direct to customers. In this On-Premise IT Roundtable, the panel discusses where the value actually gets added and what benefits VARs can still provide. It was an interesting discussion with a lot of different perspectives.
IoT Is Making Society Less Secure – The On-Premise IT Roundtable
Jun 11, 2019
The Premise: By proliferating the number of devices on our networks, IoT is making our society less secure.
On this episode, the roundtable discusses if IoT is making us less secure overall. They get into a discussion of what kind of attack surfaces IoT presents, whether these devices impact privacy more than security, and why current IoT is based on a “no support” model.
Multi-Cloud Is A Fad – The On-Premise IT Roundtable
May 28, 2019
The Premise: The push for multi-cloud is driven by vendors and analysts, not by an actual IT need.
On this episode, the roundtable discusses if the framing of multi-cloud as an inevitable IT outcome is really accurate. Is multi-cloud just something being pushed by analysts and vendors with solutions to sell? If so, will it ultimately be a fad? They further discuss what they mean when they say multi-cloud, which further clarifies the premise.
You’re Wrong About Data Protection Policy – The On-Premise IT Roundtable
May 14, 2019
The Premise: Data protection policy isn’t defined by business need or IT capability, but rather by an inherited set of traditions and superstitions.
On this episode, the roundtable discusses data protection policy. The premise is that most organizations are doing this wrong. There’s a fundamental misalignment between what IT thinks it needs to be doing and what the business needs for operations and compliance. They discuss who needs to be taking ownership of these policies, how storage vendors are partially responsible, and how to move forward.
Bringing Yourself to Work – The On-Premise IT Roundtable
Apr 30, 2019
The Premise: There’s value to the business in bringing your personal interests to work in IT.
Today’s show discusses when you can bring your personal life into IT. We discuss if doing so is just a way to reduce burnout, or if there is legitimate business value to be found. We touch on how to approach supposed “third rail” topics and more.
The Cloud is Going to Disappear – The On-Premise IT Roundtable
Apr 16, 2019
The Premise: People don’t want cloud, they want what it does.
Today’s episode considers whether people want the cloud, or what the cloud actually does. In this case, we’re looking at whether a focus on providing services will eventually make the cloud irrelevant, since people don’t really care about it. Or have the cloud providers created sufficient value-add services to solve business problems that make the cloud itself relevant, not just API-driven functions?
Network Analytics Is Too Expensive – The On-Premise IT Roundtable
Apr 02, 2019
The Premise: Network analytics is too expensive for the modern enterprise.
Thanks to the growth of software-defined networking, a lot of network information that used to be unknown, is now known. But in order to get that information out of the network, you have to spend a lot of money on specialized hardware, software, and talent to program it all. Is it beyond the reach of most enterprises? Or is the cost of not knowing always greater? The roundtable discusses.
Microsoft Is Done With Windows – The On-Premise IT Roundtable
Mar 19, 2019
The Premise: With the current corporate vision, Microsoft is done with Windows.
Declaring the Death of Windows is always a great way to drum up some clicks. But today, the roundtable discusses whether Windows is just kind of beside the point for a modern Microsoft. They debate whether this means the end of Windows, the end of the beginning of the end of Windows, or just that Windows’ role in Microsoft will fundamentally change.
Change Your Password All The Time – The On-Premise IT Roundtable
Mar 05, 2019
The Premise: The best way to keep passwords secure is to change them all the time.
Changing your passwords frequently is the best way to keep accounts secure, right? Or does frequently changing passwords cause users to lean on easily predictable patterns that ultimately make things less secure? The roundtable discusses what the best approach is, whether two-factor authentication changes your approach, and what changes when considering personal vs organizational passwords.
The Storage Array is Dead – The On-Premise IT Roundtable
Feb 19, 2019
The Premise: The storage array is dead, dead as a doornail.
The monolithic storage array used to be the standard of storage, but its time has come and gone… or has it? The roundtable discusses what specifically we mean when we talk about storage arrays, why they are increasingly irrelevant, and whether their decline is permanent or a temporary reaction to recent IT trends.
You Need Sensors for Analytics – The On-Premise IT Roundtable
Feb 05, 2019
The Premise: In order to be an effective analytics company, you need a sensor.
Tom Hollingsworth leads a discussion around how important sensors are for analytics and data. Is network monitoring enough? What about something in software? Or is the added expense of a dedicated out-of-band physical sensor the price you have to pay? The roundtable is pretty evenly split on the subject, and discusses where each approach works best.
5G will Replace Traditional Networks – The On-Premise IT Roundtable
Jan 22, 2019
The Premise: 5G is set to replace traditional networking.
In this episode, the roundtable discusses what impact 5G will have on traditional networking. They dig into why wireless is a more finite resource than wired networking, the difficulty of service degradation, and how to justify rolling out 5G for fixed end points.
Composable Infrastructure is Just Blade Server 2.0 – The On-Premise IT Roundtable
Jan 08, 2019
Premise: Composable Infrastructure is just another iteration on blade servers.
Stephen Foskett leads a discussion about how big of a change composable infrastructure is from the tried and true blade server.
CI sounds like a great idea. It offers infrastructure that’s dynamic, reconfigurable by software, and has full API integration. The roundtable discusses a little history of the ideas behind composable infrastructure, and how CI can develop into something truly unique.
You Shouldn’t Run Your Own Website – On-Premise IT Roundtable
Dec 04, 2018
Stephen Foskett and the Roundtable discuss whether or not you should be running your own website. They weigh the pros and cons of each path and delve into the specifics.
It Doesn’t Matter Where Your Data is Stored – The On-Premise IT Roundtable
Oct 02, 2018
The Premise: In our automated and disaggregated world, it doesn’t matter where your data is stored.
The roundtable discusses if data locality is important to storage administrators anymore. They discuss why it might matter for technological, regulatory, and organizational needs, and how those needs have changed over time.
Enjoy this bonus IT Origins interview in the feed this week. We spoke with Senior Cloud Ops Advocate, podcaster, and oenophile Phoummala Schmitt. We discuss how she came into an IT career from the fashion industry, why we’re already living in a multi-cloud world, and when high availability in the cloud goes beyond an SLA.
Networking Disaggregation Isn’t Ready – The On-Premise IT Roundtable
Sep 18, 2018
The Premise: Networking disaggregation is not ready for the enterprise.
The panel discusses where networking disaggregation is relevant in today’s IT. Is it limited to just the largest organizations, or can even small IT teams enjoy its benefits? Or is scale less important than how an organization values the network itself?
For this bonus podcast episode, we had the privilege of speaking to Dremio CEO Tomer Shiran. We discussed how he got his start in IT, compared the tech scenes of Israel and Silicon Valley, and looked at the value of over-delivering in your career. We also discussed the importance of coffee in being an entrepreneur. It was a great discussion, enjoy!
This podcast is sponsored by SolarWinds. Be sure to check out their new Tech Publication, Orange Matter, to learn more about the other SolarWinds Head Geeks.
Leon Adato is a Head Geek at SolarWinds.
This week on the podcast, we have an interview with SolarWinds Head Geek Leon Adato, recorded on-premises at our lovely Hudson, Ohio offices. We discussed how Leon went from Theater major to working in tech education and what exactly a Head Geek does, and finished with some great career advice.
A full transcript of the interview is available here.
Table of Contents
0:00 – 0:40: Host intro
0:40 – 10:10: IT Origins Story
10:10 – 12:25: What is a Head Geek?
12:25 – 15:24: When Did Single Pane of Glass Enter the IT Lexicon?
15:24 – 22:58: Biggest Change Since You Started Your Career
22:58 – 27:05: Current Worst Trend in IT
27:05 – 32:45: Current Best Trend in IT
32:45 – 38:57: Where is IT Going in the Next 3-5 Years?
38:57 – 44:23: Book Recommendations
44:23 – 46:43: First Computer You Owned
46:43 – 48:06: What Do You Do When You’re Not Working in IT?
48:06 – 50:03: How Do You Caffeinate?
50:03 – 50:40: Who Do You Want to See on IT Origins?
50:40: Career Advice
IT Burnout is Inevitable – The On-Premise IT Roundtable
Jun 26, 2018
Premise: IT burnout is simply unavoidable. It’s a part of the gig.
The discussion: Is burnout an inevitable part of IT? Is it part of the way IT roles are created? Maybe it says something about the types of people attracted to IT. Or maybe it has something to do with the incentives that cause work to turn into burnout. Our roundtable discusses why they’ve seen burnout happen and how they cope with IT stress to avoid or mitigate it.
Revisited: Security is a Dumpster Fire – The On-Premise IT Roundtable
Jun 12, 2018
With Cisco Live US happening this week in Orlando, we decided to share a throwback episode featuring Rob Rodgers and Mils Swart of Skyport Systems. Skyport was recently acquired by Cisco in February. It’ll be interesting to see how Cisco uses this talent to address the security concerns raised in this episode.
This week on IT Origins, we had a conversation with Ted Dunning, Apache Software Foundation board member, and the Chief Application Architect at MapR. We discussed Ted’s introduction to IT, his early involvement with the open source software community, and how AI advances quickly go from aspirational to blasé. We were also fortunate to have Ted’s colleague and co-author Ellen Friedman join in on the second half of the interview. Both were able to give some great career advice about how to stay relevant in rapidly evolving fields.
Books mentioned in the interview by Ted Dunning and Ellen Friedman:
Enterprise AI Is Just a Buzzword – The On-Premise IT Roundtable
May 29, 2018
The Premise: Enterprise AI is a buzzword slapped onto products and services without any technical merit.
Let’s face it, AI gets thrown around a lot in the enterprise these days. It often gets conflated with Machine Learning, Deep Learning, and neural networks. But does the term actually mean anything? Are there solutions out there that actually qualify as AI? The roundtable debates.
This week on IT Origins, we had the privilege of speaking to Patric Palm, the Co-Founder and CEO of Favro. We discussed how he started the company, his previous startup efforts, the importance of adaptability, and his background in organization and process.
Automation Will Kill Engineering Jobs – The On-Premise IT Roundtable
May 15, 2018
The Premise: The efficiencies of network automation will decimate engineering jobs.
The panel debates if this is true. They look at if this will happen across the board, if engineers will just become programmers going forward, or if automation will actually benefit network engineers down the road. And if automation does eliminate all these jobs, does it then become a pernicious form of support lock-in?
All Your Networking Are Belong to NFV | The On-Premise IT Roundtable
Apr 24, 2018
On this episode, we’re discussing Network Functions Virtualization, aka NFV. The roundtable discusses what exactly NFV is, how it differs from SDN, and whether it’s going to eat all specialized networking hardware. The discussion then turns to how changes in network design principles also make NFV even more viable in the enterprise.
An episode so nice, we share it twice. We’re throwing a bonus episode into the feed: a great conversation with Karen Lopez for IT Origins. If you haven’t already, be sure to listen to her appearance on the What is Big Data? episode of the podcast.
Karen Lopez is a Senior Project Manager and Data Architect.
I had the privilege of talking to Data Architect Karen Lopez for this week’s IT Origins interview. We discussed how data hasn’t changed all that much since she started her career, but our ways of relating to it have. We also discussed the best and worst IT trends, got some book recommendations, and walked away with some career advice.
You Should Care Where SaaS Lives | On-Premise IT Roundtable
Apr 10, 2018
On this roundtable, we’re getting cloudy. The panelists discuss why it matters where your SaaS apps live, rather than just depending on an SLA. This can impact not just business continuity and customer experience, but security and compliance as well.
I had the privilege of talking to Zachary Smith from Packet in our most recent IT Origins interview. We discussed how he went from majoring in the double bass at Juilliard to becoming an IT entrepreneur. We also discussed his two months in the Boy Scouts, hip Australian coffee, and the benefits of innovating on the software layer.
Your Notifications Stink! The On-Premise IT Roundtable
Mar 27, 2018
Let’s face it, your alerts stink. If you’re finding lifehacks to deal with the number of notifications you’re receiving, you’ve already lost the battle. On this episode of the podcast, the roundtable discusses why we’re drowning in notifications, how to better approach it, and why we can’t actually get actionable alerts.
Painful IT Language – The On-Premise IT Roundtable
Mar 13, 2018
The podcast that inspired it all! Today we’re sharing the pilot episode of the On-Premise IT Roundtable, looking at terrible IT language. Does the name of our podcast drive you nuts? Do you cringe when someone asks you to “double-click” in conversation? Do you have opinions on how to pronounce BPDU? This is the episode for you.
Enjoy this bonus IT Origins interview episode. Our regularly scheduled podcast will post March 13, 2018.
Dong Ngo is an IT consultant and writer at Dong Knows Tech. From 1999 through 2017, he was an Editor at CNET.com, covering the storage and networking beats.
This interview provided some fascinating perspective into what we assume is significant technology. Dong shares his journey from a small village in Vietnam to moving to San Francisco in the 1990s. After listening to his interview, be sure to check out this 2011 piece from Dong about revisiting his hometown.
Words Don’t Mean Things After All! The On-Premise IT Roundtable
Feb 27, 2018
Do words mean things? It depends on who you ask. Often the more technically minded IT folks like hard and fast definitions, while marketing tends toward a more “generous” interpretation of words. Do we need to rigidly enforce definitions, or are we resigned to an infinite regress into mutual unintelligibility? We’re no stranger to this debate on Gestalt IT, but the panel sheds new light and perspective on this often frustrating premise.
The IT Differentiation Dilemma – The On-Premise IT Roundtable
Feb 13, 2018
We dug back in the On-Premise IT Roundtable archives to bring you an episode originally recorded in 2016, but incredibly prescient today. The roundtable discusses how IT companies can differentiate in an age of increasing commoditization. They look at examples like DSSD, Kaminario, and SimpliVity as ways to differentiate hardware, albeit at a considerable expense of time and resources. They then turn to software, and discuss the wave of SDS products that turned out to be features. The discussion is fascinating because many of the trends identified in this discussion have now played out in one form or another.
I had the privilege to talk to Matt Leib about how he got his start in IT, how the industry has changed since his Radio Shack days, and why the hybrid cloud is here to stay. It was a great conversation, enjoy the audio!
Licensing Models Matter- The On-Premise IT Roundtable
Jan 16, 2018
The On-Premise IT Roundtable has a bold premise for this episode: enterprise licensing models are interesting! The panel discusses why understanding licensing is vital for a modern data center as we move from CapEx to OpEx models.
BONUS Interview Episode: Allison Sheridan – IT Origins
Jan 04, 2018
Since we recorded this great interview for IT Origins, we’re including it in the On-Premise IT Roundtable feed.
Allison Sheridan is perhaps best known for Podfeet Podcasts, Technology Geek Podcasts with an EVER so Slight Apple Bias. Since 2005 her NosillaCast podcast has come out weekly without fail.
Outside of her extensive podcasting career, Allison also has decades of experience in IT. In this IT Origins interview, we discuss her move from mechanical engineering to IT, the gradual departmentalization of IT throughout her career, IT’s role in business, and the liberating definition of waste.
Plus, make sure to catch your humble interviewer give one of the most awkward definitions of DevOps!
Intent-Based Networking Isn’t Just SDN – The On-Premise IT Roundtable
Jan 02, 2018
Intent-based networking is the new hotness, but what does it actually mean? In this episode, the panel discusses how it differs from older SDN ideas. IBN integrates an abstraction layer and orchestration into a system that identifies a single source of truth that isn’t the network itself.
2017 was the Year of… – The On-Premise IT Roundtable
Dec 19, 2017
On today’s show, each of our roundtable panelists chose what was the hot ticket item of 2017. Tune in to hear their arguments why 2017 was the year of SD-WAN, HCI, Net Neutrality, or Data Management!
Failed Startups – The On-Premise IT Roundtable
Dec 05, 2017
On this episode, host Stephen Foskett talks with Mark May, Howard Marks, and Keith Townsend about what makes a failed startup. Are some concepts simply too early, or are there ideas whose time simply never arrives? They look at specific examples like Auspex Systems and Coho Data.
All Storage Should Scale-Out – The On-Premise IT Roundtable
Nov 21, 2017
Scale-out storage is great, but does it apply to all enterprise storage needs? The roundtable discusses the premise that all storage should be scale-out.
The CLI is Dead – The On-Premise IT Roundtable
Nov 07, 2017
The roundtable discusses the premise that the CLI is dead, or at least terminally ill. They look at why this is the prevailing narrative in networking and the greater IT landscape. Is the death of the CLI a foregone conclusion, or merely a framing device for moving to better processes in IT?
Security is a Dumpster Fire – The On-Premise IT Roundtable
Oct 24, 2017
Security is a dumpster fire, or is it somehow worse? Our esteemed guests discuss whether it’s actually many dumpster fires or some other form of refuse conflagration. It’s an invaluable and inflammable discussion.
Technical Debt Really Isn’t All That Bad – The On-Premise IT Roundtable
Oct 10, 2017
In this episode, the roundtable discusses the idea of technical debt. They look into why technical debt occurs, why it isn’t always a bad thing, and how to possibly optimize for when to incur it.
What Is Automation? The On-Premise IT Roundtable
Sep 26, 2017
On this episode, we’ll be talking about a hot topic in the networking space, automation. The panel discusses why organizations see automation as prohibitively complex, what exactly they mean by automation, and why it isn’t coming for their jobs.
Managed Services from the 90s to Now – The On-Premise IT Roundtable
Sep 12, 2017
On this episode, we take a look at how managed services have changed from the 90s to today. Why was something so essential and pervasive in IT today so hard to do a few decades ago? The panel has a wealth of industry experience from back then, and they share their stories from being on the front lines.
Cloud Lock-In: The On-Premise IT Roundtable
Aug 29, 2017
Cloud lock-in, sounds bad right? Well on this episode, the roundtable takes a detailed look at the actual impact of lock-in with public and private cloud providers. They look at how this impacts business agility, innovation, and overall company strategy.
Cloud is More Than a Data Center: The On-Premise IT Roundtable
Aug 15, 2017
With all the hyperbolic claims of what the cloud can do for IT, what the cloud actually means gets lost in the process. The roundtable looks at what cloud actually means in the modern enterprise. This includes the changes in workflows that need to happen to successfully migrate to the cloud. They go on to frame the cloud’s influences historically within other industries.
IoT Abandonware: The On-Premise IT Roundtable
Aug 01, 2017
The Internet of Things is already bringing a proliferation of connected devices into our lives. But as these devices increasingly become abandoned, they turn into security liabilities. The panel discusses the causes, implications, and solutions for IoT Abandonware.
The Brave New World of NVMe: The On-Premise IT Roundtable
Jul 18, 2017
The roundtable discusses how NVMe is impacting the storage industry. Is this just an iteration on what we’ve already seen with flash, or does it represent a sea change that will fundamentally change IT?
Is Kubernetes a Flash in the Pan? The On-Premise IT Roundtable
Jul 03, 2017
Is Kubernetes simply benefiting from the first mover advantage, or does it have the force to stay the dominant container orchestrator in the enterprise for years to come? The roundtable discusses.
Managing Your IT Career – The On-Premise IT Roundtable
Jun 20, 2017
IT professionals spend years learning how to manage the complex infrastructure that organizations depend on. But they often spend far less time thinking about how to manage their careers. The roundtable takes on this topic, looking into dealing with imposter syndrome, knowing your own worth, and how to navigate these potentially problematic waters.
Caching vs Tiering – The On-Premise IT Roundtable
Jun 06, 2017
Caching and tiering have been abused by marketing in enterprise IT, often used interchangeably, or simply when not applicable. Luckily, we’ve got a table; it’s round and surrounded by storage experts. They’ll explain the technical differences between caching and tiering, how to identify which is being used, and what the performance implications of each are.
What is Big Data? The On-Premise IT Roundtable
May 23, 2017
To be clear, the answer to “what is big data?” isn’t the On-Premise IT Roundtable. Nevertheless, our panelists discuss what exactly they mean when they use the term, why it’s the new hotness, and how they’ve seen it impact organizations.
Intel and Network Functions Virtualization: The On-Premise IT Roundtable
May 09, 2017
Intel isn’t known as a networking company, but they think they have a play in the network functions virtualization market. The roundtable discusses what future Intel has in the space, and how they compete with more established players in the market.
Virtualization and Containers: The On-Premise IT Roundtable 4
May 05, 2017
In light of the vSphere 6.5 release, moderator Stephen Foskett asks the roundtable about the impact of VMware integrated containers. This runs on Photon OS, a lightweight Linux distribution that runs a single container. What is the impact of this integration, in terms of security, training, and administration? And more importantly, does the industry need vSphere butting into the container space?
Locations and Beacons: The On-Premise IT Roundtable 3
May 04, 2017
On tap for today’s roundtable, the panel discusses the state of locations and beacons. Moderator Stephen Foskett asks the panel to consider how location services factor into the greater enterprise mobility landscape. This ranges from using beacons to give turn-by-turn navigation indoors to using location to cue print jobs. Often the backend of these applications has been available for a while, but now new use cases are emerging.
Is DevOps a Disaster? The On-Premise IT Roundtable 2
May 03, 2017
Moderator Stephen Foskett poses a completely non-controversial question: is DevOps a load of crap? Does DevOps just turn into NoOps? What are these darn kids doing with our infrastructure? The roundtable debates all these questions and more.
Welcome to the inaugural On-Premise IT Roundtable podcast! This episode, we’ve gathered our esteemed panel to discuss software-defined wide area networking, SD-WAN. In this emerging market, how do you compare the various offerings in the space? Is the market bound for consolidation, or will it remain full of vibrant competition? And how does the ease of use of SD-WAN impact the market for network professionals?