diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index acd890f1c9480d3ccd28e73da94d7f7b34a0a1dd..df7fc56db817887545e0927bcb5d49e093e82223 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -19,6 +19,7 @@ build_pdf:
       - "output/${Name}.pdf"
   only:
     - main
+    - develop
 
 convert_md:
   stage: convert
diff --git a/chapters/2-Methodology.tex b/chapters/2-Methodology.tex
index 0ad4332be2a3e7fa1f17fc414b0f0648f49fe78b..7a92f8ce58427b2098b23441e4b757e7bb2c40e9 100644
--- a/chapters/2-Methodology.tex
+++ b/chapters/2-Methodology.tex
@@ -1 +1,17 @@
-\chapter{Methodology}
\ No newline at end of file
+\chapter{Methodology}
+
+This chapter outlines the methodology used to compare the image processing libraries under investigation. The evaluation is grounded in two core performance metrics: \textbf{image conversion} and \textbf{pixel iteration}. These metrics provide a basis for comparing the efficiency and responsiveness of the libraries when performing fundamental image processing tasks. The following sections explain why these metrics were chosen, how they are measured, how the results are processed, and the criteria used to select the libraries.
+
+\input{sections/Chapter-2-sections/Performance-Metrics.tex}
+\input{sections/Chapter-2-sections/Rationale.tex}
+\input{sections/Chapter-2-sections/Measurement-Procedure.tex}
+\input{sections/Chapter-2-sections/Data-Analysis.tex}
+\input{sections/Chapter-2-sections/Library-Selection.tex}
+
diff --git a/outdated/Data-Collection.tex b/outdated/Data-Collection.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5861fc825ee9f1f6ba035e967886bca9f943e000
--- /dev/null
+++ b/outdated/Data-Collection.tex
@@ -0,0 +1,33 @@
+\section{Data Collection and Analysis}
+
+To ensure that our performance measurements are both reliable and meaningful, we implement a rigorous data collection and analysis process:
+
+\subsection{Repeated Trials and Averaging}
+
+Each experimental test (for both image loading and pixel iteration) is executed over a large number of iterations—typically 100 runs per library. This repetition helps smooth out transient variations and ensures that our measurements represent a consistent performance profile. The following steps are taken:
+
+\begin{itemize}
+    \item \textbf{Multiple Iterations:} For each library, the test is repeated numerous times under identical conditions.
+    \item \textbf{Statistical Averaging:} The mean time for each metric is computed from these iterations, providing a representative average performance figure.
+    \item \textbf{Variance Analysis:} Standard deviations and variances are calculated to assess the consistency of the results. Outlier values are identified and, if necessary, excluded to prevent skewed outcomes.
+\end{itemize}
+
+\subsection{Memory Profiling}
+
+In addition to timing metrics, we capture memory consumption data to understand the resource efficiency of each library. This involves:
+
+\begin{itemize}
+    \item \textbf{Memory Allocation Tracking:} Monitoring the amount of memory allocated during image loading and pixel iteration operations.
+    \item \textbf{Garbage Collection Monitoring:} Recording the frequency and duration of garbage collection events to assess their impact on performance.
+    \item \textbf{Profiling Tools:} Peak memory usage and total allocations are recorded using BenchmarkDotNet's built-in memory diagnostics, supplemented by system-level profiling utilities where needed (an illustrative benchmark declaration is sketched after this list).
+\end{itemize}
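+
+As an illustration of how these memory figures can be collected, the following sketch shows a minimal BenchmarkDotNet benchmark class annotated with its \texttt{MemoryDiagnoser} attribute, which reports allocated bytes and garbage collection counts alongside the timing results. The class name, image path, and load call are placeholders rather than the actual benchmark suite used in this work.
+
+\begin{verbatim}
+using BenchmarkDotNet.Attributes;
+using BenchmarkDotNet.Running;
+
+[MemoryDiagnoser]              // adds allocation and GC columns to the report
+public class ImageLoadingBenchmarks
+{
+    private const string ImagePath = "images/sample.jpg";   // placeholder path
+
+    [Benchmark]
+    public byte[] LoadRawBytes()
+    {
+        // Stand-in for a library-specific load; replaced by the call under test.
+        return System.IO.File.ReadAllBytes(ImagePath);
+    }
+}
+
+public static class Program
+{
+    public static void Main() => BenchmarkRunner.Run<ImageLoadingBenchmarks>();
+}
+\end{verbatim}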
+
+\subsection{Statistical Analysis}
+
+The raw data collected from repeated trials is processed using statistical software to perform further analysis:
+
+\begin{itemize}
+    \item \textbf{Confidence Intervals:} Confidence intervals are calculated around the mean values to provide a measure of the reliability of our measurements (the expression used is given after this list).
+    \item \textbf{Comparative Statistical Tests:} Where applicable, we employ statistical tests (e.g., t-tests) to determine whether differences in performance metrics between libraries are statistically significant.
+    \item \textbf{Data Visualization:} Graphs and charts are generated to visually compare the performance distributions across libraries, offering an intuitive understanding of their relative efficiencies.
+\end{itemize}
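+
+For reference, the two-sided confidence interval reported for a sample mean $\bar{t}$ of $n$ measured times with sample standard deviation $s$ takes the standard form
+\[
+\bar{t} \;\pm\; t_{1-\alpha/2,\,n-1}\,\frac{s}{\sqrt{n}},
+\]
+where $t_{1-\alpha/2,\,n-1}$ is the critical value of Student's $t$ distribution; the same statistic underlies the two-sample $t$-tests mentioned above.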
\ No newline at end of file
diff --git a/outdated/Definition.tex b/outdated/Definition.tex
new file mode 100644
index 0000000000000000000000000000000000000000..639b629cd1cf8ad0fd76770e8fc40811302525f0
--- /dev/null
+++ b/outdated/Definition.tex
@@ -0,0 +1,27 @@
+\section{Definition and Rationale for the Metrics}
+
+\subsection{Image Loading Time}
+
+\textbf{Definition:}  
+Image loading time is defined as the total elapsed time required to retrieve an image from a file system, decode it into a standardized internal representation (e.g., RGBA32), and initialize any associated data structures needed for further processing.
+
+\textbf{Rationale:}  
+\begin{itemize}
+    \item \textbf{I/O and Decoding Efficiency:} The time taken to load an image is a direct indicator of how efficiently a library handles input/output operations and decodes various image formats. In industrial applications, where images may be read in real time from cameras or sensors, fast loading is essential.
+    \item \textbf{Baseline for Processing Pipelines:} Since loading is the first step in any image processing pipeline, delays at this stage can cascade and affect overall system performance. An efficient image loading mechanism can significantly reduce the latency of the entire workflow.
+    \item \textbf{Minimizing Overhead:} The loading process also involves memory allocation and data structure initialization. A library that minimizes overhead in these areas will allow more resources to be devoted to the actual image processing tasks.
+    \item \textbf{Standardized Measurement:} The loading process can be uniformly measured across different libraries by using the same image datasets and environmental conditions, ensuring that the comparisons are fair and reproducible.
+\end{itemize}
+
+\subsection{Pixel Iteration Time}
+
+\textbf{Definition:}  
+Pixel iteration time refers to the duration required to traverse every pixel in an image and apply a predefined, uniform operation—such as converting an image to grayscale. This metric captures the efficiency of the library’s core routines for low-level data manipulation.
+
+\textbf{Rationale:}
+\begin{itemize}
+    \item \textbf{Core Computation Performance:} Many image processing tasks (e.g., filtering, thresholding, and color adjustments) require examining or modifying every pixel. The speed at which a library can iterate over pixels is a direct measure of its processing efficiency.
+    \item \textbf{Algorithmic Sensitivity:} Pixel-level operations are sensitive to implementation details like loop unrolling, memory access patterns, and caching strategies. Faster iteration times imply better-optimized routines.
+    \item \textbf{Simplicity and Reproducibility:} Converting an image to grayscale is a simple and commonly used operation that can serve as a proxy for other pixel-level tasks. Its simplicity makes it an ideal candidate for standardizing comparisons across libraries.
+    \item \textbf{Isolation of Low-Level Performance:} By isolating the pixel iteration operation from higher-level tasks, we can specifically evaluate the efficiency of the library’s data structures and internal algorithms without interference from more complex operations.
+\end{itemize}
\ No newline at end of file
diff --git a/outdated/Measurement-Procedure.tex b/outdated/Measurement-Procedure.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e2de2991771cb0279059c4b8d141dd05c4751871
--- /dev/null
+++ b/outdated/Measurement-Procedure.tex
@@ -0,0 +1,43 @@
+\section{Measurement Procedure}
+
+\subsection{Experimental Setup}
+
+To ensure consistency and reliability, all tests are performed under controlled conditions:
+
+\begin{itemize}
+    \item \textbf{Hardware Consistency:} All experiments are conducted on the same machine with a fixed hardware configuration. This eliminates variability due to differences in CPU speed, memory, or storage performance.
+    \item \textbf{Software Environment:} We use a consistent operating system and development environment (e.g., .NET framework) across all tests. Timing is measured using high-precision tools such as BenchmarkDotNet to capture accurate performance metrics.
+    \item \textbf{Image Dataset:} A standardized dataset of images is used for testing. This dataset includes images of varying resolutions and formats to simulate real-world industrial scenarios.
+    \item \textbf{Repetition and Averaging:} Each test is repeated multiple times (e.g., 100 iterations) to account for random fluctuations and to ensure that the measured performance is statistically reliable. The average and variance of the results are computed to assess consistency.
+\end{itemize}
+
+\subsection{Measuring Image Loading Time}
+
+The procedure for measuring image loading time consists of the following steps:
+
+\begin{itemize}
+    \item \textbf{File Access Initiation:} The test begins by initiating a file read operation from the disk. The image file is selected from a predetermined dataset.
+    \item \textbf{Decoding and Conversion:} Once the file is accessed, the image is decoded into a standardized internal format, such as RGBA32. This step includes converting the raw image data (e.g., JPEG, PNG) into a format that is readily usable by the library.
+    \item \textbf{Initialization of Data Structures:} Any necessary memory allocation and initialization of internal data structures are performed at this stage.
+    \item \textbf{Timing the Operation:} A high-resolution timer records the time from the initiation of the file read operation until the image is fully loaded and ready for processing.
+    \item \textbf{Repetition and Averaging:} This process is repeated multiple times, and the average loading time is computed. Variability in the measurements is analyzed using standard deviation metrics to ensure reproducibility.
+\end{itemize}
+
+\subsection{Measuring Pixel Iteration Time}
+
+The measurement of pixel iteration time is carried out in a similar systematic manner:
+\begin{itemize}
+    \item \textbf{Image Loading:} Prior to the iteration test, the image is loaded into memory using the same process as described above. This ensures that the image is in a known, consistent state.
+    \item \textbf{Pixel Operation Execution:} A simple operation is defined (e.g., converting each pixel to its grayscale equivalent). The algorithm iterates over every pixel, reading its RGB values and computing the grayscale value based on a weighted sum.
+    \item \textbf{Timing the Iteration:} The entire pixel iteration process is timed from the moment the iteration begins until every pixel has been processed. High-precision timers are used to capture this duration.
+    \item \textbf{Isolation of the Operation:} To ensure that the measurement reflects only the time for pixel iteration, other processes (such as file I/O) are not included in this timing.
+    \item \textbf{Multiple Iterations:} Like the image loading test, the pixel iteration test is repeated many times (e.g., 100 iterations) to obtain an average processing time. Outliers are analyzed and removed if they are deemed to be due to external interference.
+\end{itemize}
+
+\subsection{Tools and Instrumentation}
+
+\begin{itemize}
+    \item \textbf{Benchmarking Framework:} BenchmarkDotNet is used as the primary tool for performance measurement. It provides accurate timing measurements and can also track memory usage.
+    \item \textbf{Profiling Utilities:} Additional profiling tools are employed to monitor memory allocation and garbage collection events. This ensures that both time and resource consumption are captured.
+    \item \textbf{Data Logging:} All measurements are logged for further statistical analysis. This raw data is later processed to compute averages, standard deviations, and confidence intervals, forming the basis for our comparative analysis.
+\end{itemize}
\ No newline at end of file
diff --git a/outdated/Overview.tex b/outdated/Overview.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e3092c7ed92224e0b6110673be2138abd26c8333
--- /dev/null
+++ b/outdated/Overview.tex
@@ -0,0 +1,19 @@
+\section{Overview}
+
+Image processing tasks in industrial applications typically involve two fundamental operations: acquiring image data (loading) and performing pixel-level computations (iteration). The efficiency of these operations directly influences the overall performance of any image processing system. In our evaluation, we have chosen to focus on two key metrics:
+
+\begin{itemize}
+    \item \textbf{Image Loading Time:} The time taken to load an image from persistent storage into the system's memory.
+    \item \textbf{Pixel Iteration Time:} The duration required to traverse and process each pixel in an image, exemplified by converting the image to grayscale.
+\end{itemize}
+
+These metrics are chosen for several reasons:
+
+\begin{itemize}
+    \item \textbf{Universality:} Both operations are common to nearly all image processing workflows, regardless of the complexity of the subsequent processing steps.
+    \item \textbf{Fundamental Performance Indicators:} The speed of image loading reflects the efficiency of file I/O, image decoding, and memory allocation, while pixel iteration performance indicates how well a library can handle low-level data manipulation—an operation that is central to filtering, enhancement, and other pixel-based computations.
+    \item \textbf{Comparability and Reproducibility:} By using standardized tasks that all libraries must perform, we can compare their performance on a like-for-like basis. This approach minimizes variability and provides a clear baseline for comparing otherwise diverse systems.
+    \item \textbf{Hardware Independence:} These metrics are less influenced by high-level algorithmic choices and more by the underlying implementation and optimizations, making them suitable for benchmarking across different libraries and platforms.
+\end{itemize}
+
+While there are many other potential metrics (such as encoding time, advanced filtering speed, or transformation accuracy), we selected image loading and pixel iteration because they are both critical and universally applicable operations. They provide a controlled environment for performance measurement and are directly relevant to the low-level efficiency needed in industrial scenarios.
\ No newline at end of file
diff --git a/outdated/Selection-Criteria.tex b/outdated/Selection-Criteria.tex
new file mode 100644
index 0000000000000000000000000000000000000000..64c60729b67711ef5cb15ffd8f713bee06dfa6be
--- /dev/null
+++ b/outdated/Selection-Criteria.tex
@@ -0,0 +1,52 @@
+\section{Selection Criteria for Image Processing Libraries}
+
+Selecting the appropriate libraries for our comparison is a critical step that shapes the overall evaluation. We employ a set of comprehensive criteria to ensure that only relevant and robust image processing libraries are included:
+
+\subsection{Functional Coverage}
+
+The primary requirement is that the library must support the core operations fundamental to image processing:
+
+\begin{itemize}
+    \item \textbf{Image Loading and Creation:} The library should efficiently load images from various formats and support the creation of new images.
+    \item \textbf{Pixel Manipulation:} It must provide mechanisms for direct pixel access and manipulation, which are essential for tasks like filtering, transformation, and color adjustments.
+    \item \textbf{Transformation Capabilities:} Support for resizing, cropping, and color space conversions is essential to evaluate overall processing flexibility.
+\end{itemize}
+
+\subsection{Performance and Resource Efficiency}
+
+Given the industrial context, the following performance aspects are prioritized:
+
+\begin{itemize}
+    \item \textbf{Low Image Loading Time:} Efficient I/O and decoding capabilities ensure that the system can handle high volumes of image data.
+    \item \textbf{Fast Pixel Iteration:} The library must exhibit optimized routines for traversing and processing pixels, indicating low-level efficiency.
+    \item \textbf{Memory Usage:} Efficient memory management is critical, particularly when processing high-resolution images or large batches. Libraries with minimal memory overhead and low garbage collection impact are preferred.
+\end{itemize}
+
+\subsection{Ease of Integration}
+
+Practical integration into existing industrial systems is another important criterion:
+
+\begin{itemize}
+    \item \textbf{System Compatibility:} The library should seamlessly integrate with our existing software stack (e.g., the .NET framework).
+    \item \textbf{Documentation and Community Support:} Comprehensive documentation and active community support facilitate adoption and troubleshooting.
+    \item \textbf{Modularity and Extensibility:} A modular design that allows for the easy addition of custom functionalities is advantageous for industrial applications that may have evolving requirements.
+\end{itemize}
+
+\subsection{Licensing and Cost Considerations}
+
+While the focus is on performance, practical deployment also depends on the licensing terms and cost:
+
+\begin{itemize}
+    \item \textbf{Open-Source or Cost-Effective Licensing:} Libraries with permissive or cost-effective licenses are preferred, as they reduce the total cost of ownership.
+    \item \textbf{Long-Term Maintenance:} Consideration of ongoing maintenance costs and the ease of future updates is essential for sustainable industrial deployment.
+\end{itemize}
+
+\subsection{Relevance to Industrial Applications}
+
+Finally, the chosen libraries must demonstrate applicability to real-world industrial scenarios:
+
+\begin{itemize}
+    \item \textbf{Real-Time Processing:} The ability to handle real-time image processing is crucial for applications such as quality control and automated inspection.
+    \item \textbf{Scalability:} The library should efficiently manage large datasets and high-resolution images.
+    \item \textbf{Robustness:} Proven reliability in diverse industrial conditions (e.g., varying lighting, environmental noise) is essential for practical deployment.
+\end{itemize}
\ No newline at end of file
diff --git a/outdated/Summary.tex b/outdated/Summary.tex
new file mode 100644
index 0000000000000000000000000000000000000000..901666e3c7905b89452c4174793e8c9398218286
--- /dev/null
+++ b/outdated/Summary.tex
@@ -0,0 +1,9 @@
+\section{Summary}
+
+This chapter has presented a detailed methodology for the comparative evaluation of image processing libraries. We focused on two fundamental performance metrics—image loading time and pixel iteration time—selected for their universality, reproducibility, and direct relevance to low-level image processing tasks. The measurement procedures involve controlled experiments using standardized image datasets, with repeated trials to ensure statistical reliability and comprehensive memory profiling to capture resource efficiency.
+
+Furthermore, we outlined the selection criteria used to choose the libraries for evaluation, which include functional coverage, performance, ease of integration, licensing, and industrial applicability. By applying these criteria, we ensure that our comparative analysis is grounded in practical relevance and technical rigor.
+
+The methodology described in this chapter forms the backbone of our experimental evaluation. It provides a clear, structured framework for measuring and analyzing the performance of different image processing libraries. This framework not only facilitates direct comparisons but also helps identify trade-offs between speed, memory efficiency, and ease of integration—factors that are critical for the deployment of image processing solutions in industrial applications.
+
+With the methodology established, the following chapters will present the experimental results and discuss their implications for selecting the most suitable image processing library for industrial applications.
\ No newline at end of file
diff --git a/sections/Chapter-2-sections/Data-Analysis.tex b/sections/Chapter-2-sections/Data-Analysis.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d6d24557a4d38d499e8d60a1e631235dbe0b2b38
--- /dev/null
+++ b/sections/Chapter-2-sections/Data-Analysis.tex
@@ -0,0 +1,21 @@
+\section{Data Analysis and Result Processing}
+
+\subsection{Data Collection and Organization}
+For both the image loading and pixel iteration tests, data is collected in three key components:
+\begin{itemize}
+    \item \textbf{Warm-Up Time:} This phase helps stabilize the runtime environment and mitigate the effects of just-in-time compilation or caching. The warm-up durations are recorded separately to ensure that subsequent measurements reflect a stable runtime state.
+    \item \textbf{Average Time Excluding Warm-Up:} By averaging the time taken for the main iterations (after warm-up), a normalized metric is produced that indicates the typical performance during steady-state execution.
+    \item \textbf{Total Time Including Warm-Up:} This cumulative measure captures the overall time investment required for the complete process, giving a holistic view of performance (a sketch of how these three figures are derived follows this list).
+\end{itemize}
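+
+A minimal sketch of how these three figures can be derived from the recorded per-iteration timings is shown below; the method and variable names are illustrative and not taken from the benchmark code itself.
+
+\begin{verbatim}
+using System.Collections.Generic;
+using System.Linq;
+
+public static class TimingSummary
+{
+    // warmUpTimes and mainTimes hold per-iteration durations in milliseconds.
+    public static (double warmUp, double avgExclWarmUp, double totalInclWarmUp)
+        Summarize(IReadOnlyList<double> warmUpTimes, IReadOnlyList<double> mainTimes)
+    {
+        double warmUp = warmUpTimes.Sum();         // total warm-up time
+        double avg    = mainTimes.Average();       // steady-state average per iteration
+        double total  = warmUp + mainTimes.Sum();  // cumulative time including warm-up
+        return (warmUp, avg, total);
+    }
+}
+\end{verbatim}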
+
+All measurements are recorded using high-resolution timers (e.g., the .NET \texttt{Stopwatch} class) to ensure accuracy. The results from each iteration are exported to an Excel spreadsheet using a library such as EPPlus, which allows for further statistical analysis, graphing, and comparison among the libraries.
+
+\subsection{Processing and Analysis}
+The processing of the raw timing data involves:
+\begin{itemize}
+    \item \textbf{Statistical Aggregation:} Calculating mean, median, and, where relevant, variance helps in understanding not only the average performance but also the consistency of each library (the expressions used are given after this list).
+    \item \textbf{Comparative Visualization:} Using graphs and charts, the results are visualized side by side. These visuals assist in identifying performance bottlenecks or outliers.
+    \item \textbf{Performance Profiling:} By comparing the image conversion and pixel iteration metrics, the analysis highlights the trade-offs between high-level and low-level performance. For example, a library might excel in fast image loading but could be less efficient in per-pixel operations.
+\end{itemize}
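+
+For completeness, the aggregation uses the standard estimators over the $n$ recorded iteration times $t_1,\dots,t_n$:
+\[
+\bar{t} = \frac{1}{n}\sum_{i=1}^{n} t_i,
+\qquad
+s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(t_i - \bar{t}\right)^2 .
+\]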
+
+This structured analysis ensures that the evaluation is objective, reproducible, and covers both high-level and low-level aspects of image processing.
diff --git a/sections/Chapter-2-sections/Library-Selection.tex b/sections/Chapter-2-sections/Library-Selection.tex
new file mode 100644
index 0000000000000000000000000000000000000000..dab82bccad3bf6f83c79e0ceff06b6d30ba2148f
--- /dev/null
+++ b/sections/Chapter-2-sections/Library-Selection.tex
@@ -0,0 +1,27 @@
+\section{Library Selection Criteria}
+
+\subsection{Selection Process}
+The libraries were chosen based on a combination of performance benchmarks, feature sets, and cost considerations. The selection process involved the following steps:
+\begin{itemize}
+    \item \textbf{Initial Survey:} A comprehensive review of available image processing libraries in the .NET and cross-platform ecosystems was conducted.
+    \item \textbf{Feature Mapping:} Each library was evaluated for core functionalities such as image loading, pixel manipulation, support for multiple pixel formats, and the ability to perform complex operations (e.g., cropping, resizing, and layer composition).
+    \item \textbf{Cost Analysis:} Given that ImageSharp carries a recurring licensing cost, alternatives were considered if they offered similar capabilities at a lower or no cost. This factor was critical for long-term sustainability.
+\end{itemize}
+
+\subsection{Criteria for Comparison}
+The following criteria were used to compare and select libraries:
+\begin{itemize}
+    \item \textbf{Performance:} Measured via the defined metrics (image conversion and pixel iteration times). Libraries with faster and more consistent performance were favored.
+    \item \textbf{Functionality:} The library’s ability to handle a broad spectrum of image processing tasks. This includes both low-level operations (like direct pixel access) and high-level operations (like image composition).
+    \item \textbf{Ease of Integration:} Libraries that offered simple integration with the .NET framework (or had comprehensive wrappers) were prioritized. The ease of adoption and the availability of community support or documentation were also key considerations.
+    \item \textbf{Cost and Licensing:} Free and open-source libraries were preferred. However, if a commercial library provided substantial performance or feature benefits—justifying its cost—it was also considered.
+    \item \textbf{Scalability and Maintainability:} Future scalability was taken into account. A library that could efficiently handle larger images or more complex processing tasks was seen as more future-proof.
+\end{itemize}
+
+\subsection{Rationale for Criteria}
+These criteria were chosen to ensure that the selected libraries meet the following key requirements:
+\begin{itemize}
+    \item \textbf{Performance and Functionality:} At the heart of image processing is the ability to quickly and efficiently manipulate image data. The chosen metrics directly reflect these capabilities. Any library that underperformed in these areas would risk impacting the overall user experience.
+    \item \textbf{Ease of Integration and Cost:} For both academic and practical applications, it is important that the chosen solution does not require extensive re-engineering or incur significant ongoing costs. Libraries that can be integrated with minimal effort and without additional licensing fees are therefore more attractive.
+    \item \textbf{Scalability:} As image resolutions continue to increase and applications demand more real-time processing, scalability becomes a critical factor. Libraries with proven performance in handling large datasets are better suited for future challenges.
+\end{itemize}
diff --git a/sections/Chapter-2-sections/Measurement-Procedure.tex b/sections/Chapter-2-sections/Measurement-Procedure.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2144fd9d407a441a57b124853ca93c1ca7cd337f
--- /dev/null
+++ b/sections/Chapter-2-sections/Measurement-Procedure.tex
@@ -0,0 +1,37 @@
+\section{Measurement Procedure}
+
+\subsection{Experimental Setup}
+Two sets of benchmark tests were developed:
+\begin{itemize}
+    \item \textbf{Image Conversion Benchmark:} 
+    \begin{itemize}
+        \item \textbf{Process:} The test reads an image from disk, performs a format conversion (reading a JPG and writing a PNG), and then saves the output.
+        \item \textbf{Measurement:} A high-resolution timer (the \texttt{Stopwatch} class in .NET) is used to record the elapsed time for each operation (a generic timing harness is sketched after this list).
+        \item \textbf{Iterations:} A series of 100 iterations is executed, preceded by a number of warm-up iterations to mitigate the effects of just-in-time compilation and caching.
+    \end{itemize}
+    \item \textbf{Pixel Iteration Benchmark:}
+    \begin{itemize}
+        \item \textbf{Process:} The image is loaded, and every pixel is accessed sequentially. For each pixel, a simple grayscale conversion is applied.
+        \item \textbf{Measurement:} Similar to the conversion test, the time taken for each iteration is recorded using a high-resolution timer.
+        \item \textbf{Iterations:} Again, a fixed number of warm-up iterations is used before measuring the main iterations.
+    \end{itemize}
+\end{itemize}
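+
+The timing harness shared by both benchmarks can be summarized by the following simplified sketch; the \texttt{operation} delegate stands in for the library call under test (format conversion or pixel iteration), and the iteration counts shown are configurable defaults rather than fixed values.
+
+\begin{verbatim}
+using System;
+using System.Diagnostics;
+
+public static class BenchmarkHarness
+{
+    // Runs warm-up iterations (recorded separately) followed by the measured
+    // iterations, returning per-iteration times in milliseconds.
+    public static (double[] warmUp, double[] main) Run(
+        Action operation, int warmUpCount = 5, int iterationCount = 100)
+    {
+        var warmUp = new double[warmUpCount];
+        var main   = new double[iterationCount];
+        var sw     = new Stopwatch();
+
+        for (int i = 0; i < warmUpCount; i++)
+        {
+            sw.Restart();
+            operation();                        // e.g. read a JPG and write a PNG
+            sw.Stop();
+            warmUp[i] = sw.Elapsed.TotalMilliseconds;
+        }
+
+        for (int i = 0; i < iterationCount; i++)
+        {
+            sw.Restart();
+            operation();
+            sw.Stop();
+            main[i] = sw.Elapsed.TotalMilliseconds;
+        }
+
+        return (warmUp, main);
+    }
+}
+\end{verbatim}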
+
+\subsection{Data Collection and Processing}
+\begin{itemize}
+    \item \textbf{Warm-Up and Main Iterations:}
+    \begin{itemize}
+        \item Warm-up iterations are run to stabilize the runtime environment. Their durations are recorded separately to ensure that only steady-state performance is analyzed.
+        \item The main iterations are then executed, and the time for each iteration is recorded.
+    \end{itemize}
+    \item \textbf{Metrics Computed:}
+    \begin{itemize}
+        \item \textbf{Warm-Up Time:} Total time consumed during warm-up iterations.
+        \item \textbf{Average Time Excluding Warm-Up:} The mean duration of the main iterations, providing a normalized measure of performance.
+        \item \textbf{Total Time Including Warm-Up:} The cumulative duration that includes both warm-up and main iterations.
+    \end{itemize}
+    \item \textbf{Result Storage:}
+    \begin{itemize}
+        \item The results are written to an Excel file using a library such as EPPlus, which facilitates further analysis and visualization; this systematic storage allows for easy comparison across libraries (a minimal export sketch follows this list).
+    \end{itemize}
+\end{itemize}
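+
+A minimal sketch of this export step, assuming EPPlus is the spreadsheet library, is given below; the column layout, sheet naming, and file name are illustrative only.
+
+\begin{verbatim}
+using System.IO;
+using OfficeOpenXml;
+
+public static class ResultExporter
+{
+    public static void Export(string libraryName, double[] mainTimes, string path)
+    {
+        // EPPlus 5-7 require a license context; newer versions use a different API.
+        ExcelPackage.LicenseContext = LicenseContext.NonCommercial;
+
+        using var package = new ExcelPackage();
+        var sheet = package.Workbook.Worksheets.Add(libraryName);
+
+        sheet.Cells[1, 1].Value = "Iteration";
+        sheet.Cells[1, 2].Value = "Time (ms)";
+
+        for (int i = 0; i < mainTimes.Length; i++)
+        {
+            sheet.Cells[i + 2, 1].Value = i + 1;
+            sheet.Cells[i + 2, 2].Value = mainTimes[i];
+        }
+
+        package.SaveAs(new FileInfo(path));
+    }
+}
+\end{verbatim}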
diff --git a/sections/Chapter-2-sections/Performance-Metrics.tex b/sections/Chapter-2-sections/Performance-Metrics.tex
new file mode 100644
index 0000000000000000000000000000000000000000..34393331629b1823e9c97a2f84c6c7f7f960812d
--- /dev/null
+++ b/sections/Chapter-2-sections/Performance-Metrics.tex
@@ -0,0 +1,21 @@
+\section{Performance Metrics}
+
+\subsection{Image Conversion}
+Image conversion time, of which image loading is the first component, quantifies the duration required to:
+
+\begin{itemize}
+    \item \textbf{Load an image from disk:} This involves reading the file and decoding the image data.
+    \item \textbf{Perform a format conversion:} Typically, the image is converted from one format to another (e.g., JPG to PNG), simulating a common operation in many applications.
+    \item \textbf{Save the processed image back to disk.}
+\end{itemize}
+
+This metric is crucial because it directly impacts user experience in applications where quick display and manipulation of images are required. In scenarios such as web services or interactive applications, delays in image loading can degrade performance noticeably.
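+
+By way of illustration, with a library such as ImageSharp (one of the candidates considered later in this chapter) the measured sequence reduces to a few calls; the sketch below is indicative only and the file paths are placeholders.
+
+\begin{verbatim}
+using SixLabors.ImageSharp;
+
+// Load and decode a JPG, then re-encode and save it as a PNG.
+using (var image = Image.Load("input/sample.jpg"))
+{
+    image.SaveAsPng("output/sample.png");
+}
+\end{verbatim}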
+
+\subsection{Pixel Iteration}
+Pixel iteration measures the time taken to traverse every pixel in an image and apply a simple operation—typically converting each pixel to grayscale. This metric isolates:
+\begin{itemize}
+    \item \textbf{Low-level processing efficiency:} Since many image operations (e.g., filtering, transformations) involve per-pixel computations, the speed at which a library can iterate over pixel data is a fundamental indicator of its performance.
+    \item \textbf{Scalability:} As image resolutions increase, efficient pixel-level operations become critical.
+\end{itemize}
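+
+To make the per-pixel operation concrete, the sketch below applies a weighted grayscale conversion (ITU-R BT.601 luma weights) to a raw RGBA32 buffer; this generic form is shown instead of any particular library's pixel-access API.
+
+\begin{verbatim}
+public static class GrayscaleConverter
+{
+    // pixels is a packed RGBA32 buffer: 4 bytes (R, G, B, A) per pixel.
+    public static void ToGrayscale(byte[] pixels)
+    {
+        for (int i = 0; i < pixels.Length; i += 4)
+        {
+            byte r = pixels[i];
+            byte g = pixels[i + 1];
+            byte b = pixels[i + 2];
+
+            // ITU-R BT.601 weighted sum.
+            byte gray = (byte)(0.299 * r + 0.587 * g + 0.114 * b);
+
+            pixels[i] = pixels[i + 1] = pixels[i + 2] = gray;  // alpha unchanged
+        }
+    }
+}
+\end{verbatim}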
+
+These two metrics were selected because they represent two complementary aspects of image processing: the high-level overhead of loading and converting images, and the low-level efficiency of pixel manipulation.
\ No newline at end of file
diff --git a/sections/Chapter-2-sections/Rationale.tex b/sections/Chapter-2-sections/Rationale.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2b449d141f56136df11342393c32e0db3988286e
--- /dev/null
+++ b/sections/Chapter-2-sections/Rationale.tex
@@ -0,0 +1,16 @@
+\section{Rationale for Metric Selection}
+
+\subsection{Why These Metrics?}
+\begin{itemize}
+    \item \textbf{Foundational Operations:} Both image loading and pixel iteration are ubiquitous in image processing workflows. Almost every operation—from simple transformations to complex filtering—builds upon these basic tasks.
+    \item \textbf{Reproducibility and Objectivity:} Measuring the time for these operations yields quantitative, repeatable data that can be used to objectively compare different libraries.
+    \item \textbf{Application Relevance:} Many real-world applications, including web services, mobile apps, and desktop software, require fast image loading for improved responsiveness and efficient pixel processing for real-time manipulation.
+\end{itemize}
+
+\subsection{Why Not Other Metrics?}
+While other metrics—such as memory usage, image saving speed, or complex transformation performance—could also provide valuable insights, the selected metrics were prioritized because:
+\begin{itemize}
+    \item \textbf{Simplicity:} The chosen metrics are straightforward to measure with minimal external dependencies.
+    \item \textbf{Isolation of Core Operations:} Focusing on these metrics allows for a clear comparison of the libraries’ core capabilities without conflating results with higher-level or library-specific optimizations.
+    \item \textbf{Baseline Performance Indicator:} They serve as a baseline against which more complex operations can later be contextualized if needed.
+\end{itemize}