
Context Engineering vs Prompt Engineering: What’s More Critical for AI-Driven Testing?


The emergence of Artificial Intelligence, especially in the form of Large Language Models (LLMs), has generated innovative ideas in the field of Software Testing. AI is now being used to generate and automate test cases, proving to be a valuable aid for quality engineers. As teams incorporate GenAI into their workflows, a crucial question arises: is prompt engineering the key to productivity, or is it context engineering? Let’s unpack both and see why context engineering might hold the key to scalable, intelligent, and reliable AI-driven testing.

Prompt Engineering: Quick Results, Limited Depth

Prompt engineering is the craft of writing instructions or questions tailored to get the best response from an AI model. In software testing, this often looks like:

- “Write 10 boundary test cases for a login form.”
- “Generate Selenium code to test a shopping cart checkout.”
- “Summarize this test suite for product owners.”

Prompting is flexible and feels magical for rapid experiments. However, its effectiveness depends heavily on exact phrasing, making it useful for quick tasks but less consistent in structured, repeatable environments. Challenges include:

- Reliance on explicit information in the prompt.
- Struggles with domain-specific logic and evolving business rules.

Prompt engineering excels at:

- Quickly generating edge-case scenarios.
- Converting requirements into test steps.
- Producing test data for negative testing.

Context Engineering: The Key to Scalable AI

Context engineering is the discipline of designing the environment in which an AI operates. This means supplying the model with relevant metadata, documents, historical test cases, business rules, and logs: everything it needs to see the big picture before generating a response. Instead of just prompting “Write a test case for checkout failure,” context engineering equips the AI with prior test cases, detailed product documentation, and system logs. The result: AI-generated test cases are traceable, relevant, and context-aware. Benefits for testing include:

- Understanding domain-specific rules (e.g., financial or healthcare compliance).
- Automatically updating test cases as user stories evolve.
- Correlating bugs with test results and code commits.

Context engineering enhances AI’s capabilities, enabling it to align testing with business logic and minimize manual oversight.

Why Context Matters Most

Software testing demands coverage, accuracy, risk mitigation, and accountability, not just content generation. Context engineering stands out because it:

- Grounds AI responses in real system knowledge, reducing hallucinations.
- Enables reusability across test scenarios, releases, and environments.
- Improves traceability to requirements and defects.
- Supports domain-specific tuning for different industries.

Prompt engineering may impress during demos, but context engineering delivers resilience in production environments.

Best Practice: Use Both, But Prioritize Context

Prompting offers precision, while context provides depth. For teams building AI-augmented testing frameworks, long-term value lies in investing more into context. Steps to get started:

1. Ingest requirements, previous test cases, architecture diagrams, user flows, and defect logs into a context repository.
2. Define structured schemas for the AI to access and interpret these assets.
3. Layer targeted prompts on this solid foundation.

Think of it this way: prompting tells the AI what to do; context tells it how and why.

Practical Implementation for Test Teams

To operationalize context engineering:

1. Start by collecting core test assets (requirements, past test cases, architecture, user flows, defects).
2. Build a context repository accessible to your LLM.
3. Pair it with focused prompts, such as “Generate regression cases for changed modules,” with the AI referencing release and dependency histories.
4. Always validate AI outputs: human oversight ensures accuracy and aligns results with business objectives.

Summary

As GenAI continues to evolve, testers who embrace context engineering will go beyond simple automation: they’ll become curators of intelligence in the software lifecycle. It’s not about asking better questions; it’s about making the AI smarter before you ask. And in a world where speed meets complexity, that might be the competitive edge your testing practice needs.
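As a minimal sketch of the context-repository idea described above (the `ContextRepository` class, field names, and asset formats here are all hypothetical illustrations, not a real library), the workflow of ingesting test assets and layering a targeted prompt on top of them might look like this in Python:

```python
# Hypothetical sketch: a context repository for AI-driven testing.
from dataclasses import dataclass, field

@dataclass
class ContextRepository:
    """Holds test assets an LLM can draw on before generating anything."""
    requirements: dict = field(default_factory=dict)   # id -> requirement text
    past_tests: list = field(default_factory=list)     # prior test cases
    defect_logs: list = field(default_factory=list)    # historical defects

    def add_requirement(self, req_id: str, text: str) -> None:
        self.requirements[req_id] = text

    def build_context(self, req_id: str) -> str:
        """Assemble the context block that precedes the actual prompt."""
        related_tests = [t for t in self.past_tests if req_id in t]
        return (
            f"Requirement {req_id}: {self.requirements[req_id]}\n"
            f"Related past tests: {related_tests}\n"
            f"Known defects: {self.defect_logs}"
        )

def contextual_prompt(repo: ContextRepository, req_id: str, task: str) -> str:
    """Layer a targeted prompt on top of the assembled context."""
    return f"{repo.build_context(req_id)}\n\nTask: {task}"

repo = ContextRepository()
repo.add_requirement("REQ-42", "Checkout must reject expired cards.")
repo.past_tests.append("REQ-42: verify declined-card error message")
prompt = contextual_prompt(repo, "REQ-42", "Write a test case for checkout failure.")
print(prompt)
```

In a real pipeline, `prompt` would be sent to the LLM of your choice; the point of the sketch is that the model sees the requirement, related history, and defects before the task itself.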

Bringing GPT-2 to Android with KerasNLP: An ODML Guide

Android developers and AI enthusiasts are exploring the prospect of running powerful language models like GPT-2 directly on Android devices. The KerasNLP workshop from Google I/O 2023 has all the insights one might need to make it happen. Here’s a detailed guide to integrating GPT-2 as an On-Device Machine Learning (ODML) model on Android using KerasNLP.

Why use ODML on Android?

On-device machine learning offers several benefits:

- Latency: no need to wait for server responses.
- Privacy: data stays on the device.
- Offline access: works without internet connectivity.
- Reduced costs: lower server and bandwidth costs.

Setting up the environment

Start with a robust setup on your development machine: make sure you have Python installed along with TensorFlow and KerasNLP. Install KerasNLP using:

```shell
pip install keras-nlp
```

Loading and preparing GPT-2 with KerasNLP

KerasNLP simplifies the process of loading pre-trained models. Load GPT-2 and prepare it for ODML:

```python
import keras_nlp

model = keras_nlp.models.GPT2CausalLM.from_preset("gpt2_base_en")
```

Fine-tuning GPT-2

To make the model more relevant for your Android application, fine-tuning on a specific dataset is recommended:

```python
# Example of fine-tuning the model on your own dataset
model.fit(dataset, epochs=3)
```

Converting the model for Android

Once the model is fine-tuned, the next step is to convert it into the TensorFlow Lite (TFLite) format, which is optimized for mobile devices:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the converted model to a file
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

Integrating the TFLite model in Android

Step 1: Add the TensorFlow Lite dependency. Add the TensorFlow Lite library to your build.gradle file:

```groovy
implementation 'org.tensorflow:tensorflow-lite:2.7.0'
```

Step 2: Load the model in the Android app. Place the model.tflite file in the assets directory and write Kotlin code to load and run the model:

```kotlin
suspend fun initModel() {
    withContext(dispatcher) {
        // Load the model file
        val loadResult = loadModelFile(context)

        // Check whether loading succeeded
        if (loadResult.isFailure) {
            return@withContext when (loadResult.exceptionOrNull()) {
                is FileNotFoundException -> { /* Handle FileNotFoundException */ }
                else -> { /* Handle other exceptions */ }
            }
        }

        // Initialize the interpreter with the loaded model
        loadResult.getOrNull()?.let {
            interpreter = Interpreter(it)
            isInitialized = true
        }
    }
}
```

Running inference

Prepare your input data and call the runInterpreter method to get predictions:

```kotlin
@WorkerThread
private fun runInterpreter(input: String): String {
    val outputBuffer = ByteBuffer.allocateDirect(OUTPUT_BUFFER_SIZE)

    // Run the interpreter, which generates text into outputBuffer
    interpreter.run(input, outputBuffer)

    // Set the buffer limit to the current position and the position to 0
    outputBuffer.flip()

    // Copy the generated bytes out of the output buffer
    val bytes = ByteArray(outputBuffer.remaining())
    outputBuffer.get(bytes)
    outputBuffer.clear()

    // Return the bytes decoded as a UTF-8 string
    return String(bytes, Charsets.UTF_8)
}
```

Final thoughts

Integrating ODML with KerasNLP and TensorFlow Lite can transform an Android device into a powerhouse for real-time NLP tasks. Whether it’s for chatbots, language translation, or content generation, the capabilities are now in the palm of your hand.

Leveraging GenAI in Ideation and Planning Phase of Mobile SDLC


In the ideation and planning phases of the Software Development Life Cycle (SDLC) for mobile applications, GenAI offers transformative capabilities that simplify and enhance these critical stages. By automating idea generation, analyzing industry trends, conducting comprehensive market research, creating detailed user personas, and fostering creativity, GenAI ensures that the resulting applications are innovative, user-centric, and well aligned with current market needs and trends.

This article delves into how GenAI tools can be leveraged during the ideation and planning stages of various use cases within the AgTech domain. These insights will be particularly useful for designing business solutions tailored to farmers’ needs.

Example: A farm management mobile application serves as a comprehensive software solution aimed at helping farmers and agricultural businesses streamline their daily operations. Such an app could encompass features that track and monitor various aspects of farm management, including crop yields, livestock health, and inventory levels.

Let’s explore how GenAI contributes to different areas of this phase in the SDLC:

1. Automated Brainstorming: GenAI tools, such as ChatGPT, can generate a diverse array of ideas based on initial inputs, significantly broadening the scope of possibilities. Example: for a Crop Management App, GenAI could suggest features like real-time satellite imagery for assessing crop health, automated irrigation scheduling, or AI-driven pest and disease prediction systems.

2. Concept Development: Once basic ideas are generated, GenAI can further develop and refine these concepts, adding depth and detail to initial thoughts. Example: enhancing crop monitoring could involve integrating IoT devices for real-time soil moisture monitoring, utilizing drone imagery for detailed crop health analysis, and employing AI algorithms for predictive analytics on crop yields.

3. Trend Analysis: GenAI can analyze vast amounts of data from various sources, identifying current trends and predicting future opportunities. Example: analyzing social media data could reveal a rising trend in organic farming, while market research might identify a growing demand for apps that promote sustainable farming practices.

4. Market Research and Competitor Analysis: GenAI can rapidly assess competitor applications, pinpointing their strengths and weaknesses and uncovering potential market gaps. Example: for an Agribusiness Insights App, GenAI might identify that competitor apps excel in weather prediction but lack real-time pest detection. This opens up opportunities to integrate AI-driven pest detection and offer more comprehensive soil health analysis.

5. Generating User Personas and Stories: GenAI can create detailed user personas by analyzing demographic data, user behaviors, and preferences, which are essential for developing user-centric applications. Example: a user persona might represent a small-scale organic farmer seeking eco-friendly pest control methods. The corresponding user story could be: “As a small-scale farmer, I want an app that provides natural pest control solutions so I can maintain my organic certification.”

6. Enhanced Creativity and Innovation: GenAI continually stimulates creativity and innovation by offering a steady stream of fresh ideas and new perspectives. Example: for a Precision Agriculture App, potential features might include real-time analysis of drone imagery, automated irrigation control based on soil moisture data, and AI-driven crop health assessments.

Conclusion: By leveraging GenAI in the ideation and planning phases of the SDLC, particularly in the AgTech domain, developers and businesses can craft mobile applications that are not only technologically advanced but also precisely aligned with the needs of farmers. The integration of automated brainstorming, concept development, trend analysis, market research, user persona generation, and innovative ideas ensures that the resulting applications are robust, user-friendly, and equipped to meet the evolving demands of the agricultural sector.
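As a small, hedged illustration of the brainstorming and user-story steps discussed above (the helper names and prompt wording below are hypothetical; in practice the assembled prompt would be sent to an LLM such as ChatGPT, which this sketch does not do), the inputs can be structured in Python before any model is called:

```python
# Hypothetical helpers for automated brainstorming and user-story generation.
# No LLM API is called here; the sketch only builds the structured inputs.

def brainstorming_prompt(app_concept: str, domain: str, seed_ideas: list) -> str:
    """Turn initial inputs into a structured brainstorming request for an LLM."""
    ideas = "\n".join(f"- {idea}" for idea in seed_ideas)
    return (
        f"You are helping plan a {app_concept} in the {domain} domain.\n"
        f"Seed ideas:\n{ideas}\n"
        "Suggest 10 additional features, each with a one-line rationale."
    )

def user_story(persona: str, need: str, goal: str) -> str:
    """Render a persona's need in the standard user-story template."""
    return f"As a {persona}, I want {need} so {goal}."

prompt = brainstorming_prompt(
    "Crop Management App",
    "AgTech",
    ["real-time satellite imagery", "automated irrigation scheduling"],
)
story = user_story(
    "small-scale farmer",
    "an app that provides natural pest control solutions",
    "I can maintain my organic certification",
)
print(prompt)
print(story)
```

Keeping the prompt assembly separate from the model call makes it easy to reuse the same seed ideas and personas across different GenAI tools during planning.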