Different ways to try to get prompts injected into AI tools

When an AI tool is asked to perform a task on a piece of data (e.g. to summarize it), it should not treat any of the content as instructions. But similar to old-style SQL injections, there are ways to insert instructions into the data.
Here are some of my experiments.
Because LLMs are non-deterministic, these examples may not produce identical results each time, but they still illustrate the possibilities. Additionally, AI models continue to evolve, meaning these injection techniques may become ineffective over time.