The need for efficient data movement in computationally intensive applications has driven advancements in Peer-to-Peer Direct Memory Access (P2PDMA) technology. This need is greatly amplified in AI-related applications (information gathering, training, and inference). P2PDMA facilitates direct data transfers between PCIe devices, bypassing system memory and reducing latency and bandwidth bottlenecks. The framework has existed for many years but is only now gaining meaningful upstream traction. This presentation will delve into recent significant upgrades to the P2PDMA framework, explore test setups leveraging NVMe Computational Storage, discuss real-world use cases demonstrating the benefits of P2PDMA, and examine in more detail how it can be applied in an increasingly AI-driven world.
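
To make the data path concrete, the sketch below shows one way userspace can exercise P2PDMA on a recent Linux kernel: it mmaps peer memory from a PCIe device's p2pmem sysfs allocator and issues an O_DIRECT read from a second NVMe drive into that buffer, so the payload moves device-to-device without passing through system DRAM. This is a minimal illustration, not the presentation's test setup; the PCI address and block-device path are placeholders, and the flow assumes a kernel built with CONFIG_PCI_P2PDMA and a device that publishes p2pmem (for example, an NVMe controller memory buffer).

/*
 * Minimal P2PDMA userspace sketch (paths are illustrative placeholders).
 * Requires CONFIG_PCI_P2PDMA and a PCIe device exposing p2pmem in sysfs.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define CHUNK (64 * 1024)

int main(void)
{
    /* 1. Allocate peer-to-peer memory by mmapping the device's
     *    p2pmem "allocate" sysfs file. */
    int pfd = open("/sys/bus/pci/devices/0000:03:00.0/p2pmem/allocate",
                   O_RDWR);
    if (pfd < 0) { perror("open p2pmem"); return 1; }

    void *p2p_buf = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
                         MAP_SHARED, pfd, 0);
    if (p2p_buf == MAP_FAILED) { perror("mmap p2pmem"); return 1; }

    /* 2. O_DIRECT read from another NVMe drive lands in the peer
     *    device's memory; data never touches host DRAM. */
    int nfd = open("/dev/nvme1n1", O_RDONLY | O_DIRECT);
    if (nfd < 0) { perror("open nvme"); return 1; }

    ssize_t n = read(nfd, p2p_buf, CHUNK);
    printf("read %zd bytes directly into peer PCIe memory\n", n);

    close(nfd);
    munmap(p2p_buf, CHUNK);
    close(pfd);
    return 0;
}

The same pattern generalizes to the AI use cases discussed in the talk, where bulk transfers between storage and accelerators benefit most from keeping the CPU and system memory out of the data path.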