The increasing adoption of Machine Learning (ML) models has raised concerns about their robustness to adversarial attacks. In the context of Network Intrusion Detection Systems (NIDS), adversarial attacks consist of manipulating network traffic to degrade system performance, either by evading detection or by inducing alarm fatigue. This study examines the vulnerability of Graph Neural Network (GNN)-based NIDS under problem-space constraints, where attackers can only create new communications towards specific IP addresses while remaining consistent with network protocols. We demonstrate that these attacks constitute a weakly submodular optimization problem, solvable with simple greedy search algorithms. Our preliminary results confirm that injecting a few well-chosen connections significantly increases the number of false alarms raised by state-of-the-art GNN-based NIDS. Finally, we outline a roadmap for future work on evasion attacks and the transferability of attacks across different GNN models.
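As a rough illustration of the greedy search mentioned above (not the paper's actual implementation), the following Python sketch selects injected connections one at a time by marginal gain under an injection budget; the candidate set and the `gain` objective (e.g., the number of false alarms the target NIDS raises) are hypothetical placeholders.

```python
from typing import Callable, Iterable, List, Set, Tuple

def greedy_edge_injection(
    candidates: Iterable[Tuple[str, str]],          # candidate (src_ip, dst_ip) connections the attacker may create
    gain: Callable[[Set[Tuple[str, str]]], float],  # attack objective, e.g. false alarms raised by the NIDS (assumed)
    budget: int,                                    # maximum number of injected connections
) -> List[Tuple[str, str]]:
    """Greedy maximization of a (weakly) submodular attack objective.

    At each step, inject the candidate connection with the largest marginal
    gain until the budget is exhausted; weak submodularity of the objective
    is what gives this simple procedure an approximation guarantee.
    """
    selected: Set[Tuple[str, str]] = set()
    remaining = set(candidates)
    current_value = gain(selected)

    for _ in range(budget):
        best_edge, best_delta = None, 0.0
        for edge in remaining:
            # Marginal gain of injecting this single connection on top of those already selected.
            delta = gain(selected | {edge}) - current_value
            if delta > best_delta:
                best_edge, best_delta = edge, delta
        if best_edge is None:  # no remaining candidate improves the objective
            break
        selected.add(best_edge)
        remaining.discard(best_edge)
        current_value += best_delta

    return sorted(selected)
```

In practice, evaluating `gain` would require querying (or approximating) the target GNN-based NIDS on the graph augmented with the injected connections; the sketch abstracts that away behind the callback.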