diff --git a/chapters/2-Methodology.tex b/chapters/2-Methodology.tex
index 7a92f8ce58427b2098b23441e4b757e7bb2c40e9..450ec45f51a99ca4eefddde06054e12756158bc0 100644
--- a/chapters/2-Methodology.tex
+++ b/chapters/2-Methodology.tex
@@ -1,17 +1,95 @@
 \chapter{Methodology}
 
-This chapter outlines the methodology used to compare various image processing libraries. The evaluation is grounded in two core performance metrics: \textbf{Image Conversion} and \textbf{pixel iteration}. These metrics provide a basis for comparing the efficiency and responsiveness of different libraries in performing fundamental image processing tasks. In the following sections, we explain why these metrics were chosen, how they are measured, how the results are processed, and the criteria for selecting the libraries under investigation.
+This chapter outlines the rationale behind the methodology for comparing various image processing libraries. It explains our choice of performance metrics, describes how the metrics were obtained and processed, and details the criteria used to select the libraries under investigation. The aim is to provide an approach that yields quantitative insights while remaining connected to real-world applications.
 
-\input{sections/Chapter-2-sections/Performance-Metrics.tex}
-\input{sections/Chapter-2-sections/Rationale.tex}
-\input{sections/Chapter-2-sections/Measurement-Procedure.tex}
-\input{sections/Chapter-2-sections/Data-Analysis.tex}
-\input{sections/Chapter-2-sections/Library-Selection.tex}
+\section{Selection of Libraries for Comparison}
 
+The choice of libraries for this study was driven by several factors, including functionality, licensing, ease of integration, and performance potential. Most of the candidate image processing libraries provide wrappers or bindings for .NET, the platform of choice for our experiments. The search revealed a wide range of options, from the commercially licensed ImageSharp to various open-source alternatives such as OpenCvSharp, Emgu CV, SkiaSharp, and Magick.NET.
 
+Considering the needs of real-world image processing applications, certain technical features were deemed essential for our evaluation: support for common image formats (JPEG, PNG, BMP, WebP, etc.), mutative operations (e.g., pixel manipulation, color space conversion), and high-level operations (e.g., image composition, filtering). All libraries were evaluated on their ability to handle these tasks efficiently by inspecting their APIs and documentation. The licensing model, integration effort, and community support were also considered, to ensure that the selected libraries were not only technically capable but also practical for real-world applications. The feature comparison tables created for each library are available in the appendix.
 
+\begin{center}
+    \includegraphics[width=25em]{media/Methodology - selection.png}
+    \captionof{figure}{Selected Libraries for Comparative Evaluation}
+    \label{fig:selected-libraries}
+\end{center}
 
+This research process provided a clear picture of each library’s capabilities and helped us identify the most suitable candidates for our performance tests. As a result, five libraries, used alone or in combination, were selected for the comparative evaluation: ImageSharp and Magick.NET as single-library solutions, since each was judged capable of covering both lightweight and complex image processing tasks, and the combinations of OpenCvSharp with SkiaSharp and Emgu CV with SkiaSharp, which complement each other in terms of performance and functionality.
 
+\section{Performance Metrics and Criteria for Comparison}
 
+Image processing is an integral part of many modern applications, from web services to real-time computer vision systems. We compared the libraries on both performance and practicality using a controlled benchmarking environment. The study focused on two key performance metrics: \textbf{Image Conversion} and \textbf{Pixel Iteration}. These metrics were selected because they represent foundational operations in image processing workflows, forming the building blocks for more complex tasks.
 
+The decision to focus on image conversion and pixel iteration was based on the need for metrics that could objectively and quantitatively measure core operations while remaining independent of higher-level library-specific features. Image conversion was chosen as it involves loading an image from disk, converting its format, and saving it back. This process mirrors common operations in web applications and desktop software where rapid image display is critical. 
 
+Pixel iteration, on the other hand, was selected to capture the efficiency of low-level image manipulation. Many image processing tasks, such as filtering, transformation, and color adjustment, require access to each pixel individually. By measuring the time taken to iterate over all pixels and apply a basic grayscale conversion, we obtained a clear indicator of the library’s capability in handling computationally intensive tasks. These metrics were chosen over alternatives like image saving speed or memory usage because they directly reflect two complementary dimensions: high-level operational overhead and low-level data processing efficiency.
+
+\begin{center}
+    \includegraphics[width=23em]{media/Methodology - 2.1.png}
+    \vspace{0em}
+    \captionof{figure}{Methodological Approach Overview}
+    \label{fig:methodology-overview}
+\end{center}
+
+The measurement techniques evolved through initial prototypes and iterative refinement. Early experiments demonstrated the importance of warm-up iterations, as the system needed time to stabilize before meaningful measurements could be taken. This warm-up phase mitigated the effects of just-in-time compilation and caching, ensuring that subsequent iterations reflected steady-state performance rather than the anomalies of system initialization.
+
+\subsection{Defining the Image Conversion Metric}
+
+In the image conversion test, a JPEG image file is loaded from disk, converted to PNG format, and then saved. The JPEG and PNG formats were chosen as a common conversion scenario, since JPEG is widely used for image storage and PNG is a lossless format suitable for web applications. The entire process is timed from start to finish and repeated over many iterations. Initially, five warm-up iterations are executed to allow the system to stabilize; their durations are recorded separately and excluded from the main performance analysis. Once the system is in a steady state, a fixed number of 100 iterations is performed, and the time taken for each one is recorded.
+
+\begin{center}
+    \includegraphics[width=23em]{media/Methodology - 2.2.1.png}
+    \captionof{figure}{Image Conversion Measurement Process}
+    \label{fig:image-conversion-measurement}
+\end{center}
+
+The .NET \texttt{Stopwatch} class is used to record the elapsed time for each stage of the process. Repeating this process, first with several warm-up cycles and then with the main iterations, generated a dataset that could be averaged to produce normalized performance figures.
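+
+To make the procedure concrete, the following is a minimal sketch of such a timing loop, assuming ImageSharp as the library under test; the file names are illustrative, while the iteration counts follow the procedure described above.
+
+\begin{verbatim}
+using System;
+using System.Collections.Generic;
+using System.Diagnostics;
+using System.Linq;
+using SixLabors.ImageSharp;
+
+// Sketch of the conversion benchmark loop (assumes ImageSharp).
+const int WarmUpIterations = 5;
+const int MainIterations = 100;
+var timings = new List<double>();
+var stopwatch = new Stopwatch();
+
+for (int i = 0; i < WarmUpIterations + MainIterations; i++)
+{
+    stopwatch.Restart();
+    using (var image = Image.Load("input.jpg"))  // load JPEG from disk
+    {
+        image.SaveAsPng("output.png");           // convert and save as PNG
+    }
+    stopwatch.Stop();
+
+    if (i >= WarmUpIterations)                   // exclude warm-up timings
+        timings.Add(stopwatch.Elapsed.TotalMilliseconds);
+}
+
+Console.WriteLine($"Average: {timings.Average():F2} ms");
+\end{verbatim}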
+
+\subsection{Defining the Pixel Iteration Metric}
+
+The pixel iteration metric targets the efficiency of low-level pixel operations, which are foundational for many advanced image processing techniques. The focus here was on isolating the per-pixel operation, independent of any higher-level image processing abstractions. The measured time provided insights into how efficiently each library handles large amounts of pixel data, a critical factor when scaling to high-resolution images or real-time processing tasks. 
+
+\begin{center}
+    \includegraphics[width=23em]{media/Methodology - 2.2.2.png}
+    \captionof{figure}{Pixel Iteration Measurement Process}
+    \label{fig:pixel-iteration-measurement}
+\end{center}
+ 
+
+In this test, the image is loaded into memory, and a nested loop iterates over each pixel. For each pixel, a basic grayscale conversion is applied by computing the average of the red, green, and blue channels, and then rewriting the pixel with the computed grayscale value. Similar to the image conversion test, a series of warm-up iterations is run to ensure the system has reached a stable state. After this phase, the main iterations are executed, and the time for each cycle is recorded. The key metric is the average time per iteration, which serves as an indicator of the library's efficiency in handling per-pixel operations. The rationale behind this metric is that many advanced image processing tasks, such as filtering or feature extraction, require efficient pixel-level manipulation. 
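+
+As an illustration, the per-pixel loop can be sketched as follows; this is a simplified example assuming ImageSharp’s \texttt{Rgba32} pixel format rather than the exact benchmark code, and the timing scaffolding from the conversion test applies unchanged.
+
+\begin{verbatim}
+using SixLabors.ImageSharp;
+using SixLabors.ImageSharp.PixelFormats;
+
+// Simplified sketch of the pixel iteration test body.
+using var image = Image.Load<Rgba32>("input.jpg");
+
+for (int y = 0; y < image.Height; y++)
+{
+    for (int x = 0; x < image.Width; x++)
+    {
+        Rgba32 pixel = image[x, y];
+        // Grayscale value: average of red, green, and blue channels.
+        byte gray = (byte)((pixel.R + pixel.G + pixel.B) / 3);
+        image[x, y] = new Rgba32(gray, gray, gray, pixel.A);
+    }
+}
+\end{verbatim}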
+
+\subsection{Criteria for Library Comparison}
+
+Our comparative evaluation was based on a set of well-defined criteria that reflect both technical performance and practical implementation considerations. The primary criteria were performance (as measured by our two key metrics), functionality (including support for a wide range of image processing tasks), ease of integration (the simplicity of adopting the library within a .NET environment), and licensing. In addition to performance, the integration of BenchmarkDotNet for memory profiling adds another layer to our analysis, allowing us to evaluate the trade-offs between speed and memory consumption. 
+
+Functionality was assessed by mapping each library’s capabilities against a comprehensive feature set that included image loading, pixel manipulation, format conversion, and high-level operations such as image composition. Ease of integration was evaluated by considering the availability of wrappers or bindings, the clarity of documentation, and the level of community support. Licensing was scrutinized not only in terms of cost but also in terms of the freedoms and restrictions imposed by each license (e.g., Apache 2.0 and MIT licenses versus commercial licensing models). The feature comparison tables created for each library are available in the appendix.
+
+\begin{center}
+    \includegraphics[width=16em]{media/Methodology - criteria.png}
+    \captionof{figure}{Library Selection Criteria}
+    \label{fig:library-selection-criteria}
+\end{center}
+
+Finally, our selection criteria for libraries are grounded in both technical and practical considerations, ensuring that our findings are relevant to a wide range of use cases—from small-scale applications to enterprise-level deployments.
+
+\section{Experimental Setup and Environment}
+
+The tests were conducted in a controlled environment to ensure reproducibility and accuracy. The same machine was used for all experiments, ensuring that the hardware setup and software environment were consistent and eliminating variability due to hardware differences. Timing was performed with the \texttt{Stopwatch} class, which provided millisecond-level precision, and memory profiling was carried out with BenchmarkDotNet in separate tests. This allowed us to capture not only execution times but also memory allocations and garbage collection metrics, even though our primary focus remained on processing speed.
+
+\section{Data Collection and Processing}
+
+The collected data includes the total time taken for the warm-up phase, the average time per iteration during the main phase, and the cumulative time including warm-up. Together, these metrics provide a comprehensive view of both the image conversion and pixel iteration tests. The simplicity of this design allows for easy replication and clear comparisons among different libraries, which is essential when making performance-based decisions. Each iteration's timing data was recorded using the high-resolution \texttt{Stopwatch} class and stored in memory. Following the completion of each test, the raw data was exported to an Excel file using the EPPlus library. This allows for later statistical analysis, such as calculating the mean, median, and standard deviation of the performance times. The Excel files also served as a repository for the comparative charts and graphs used to visually represent our findings.
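+
+As a sketch of this export step, assuming EPPlus 5 or later (which requires declaring a license context) and placeholder timing data in place of the actual recorded durations:
+
+\begin{verbatim}
+using System.Collections.Generic;
+using System.IO;
+using OfficeOpenXml;
+
+// Sketch of exporting recorded timings with EPPlus; worksheet and
+// file names are illustrative, not the exact ones used in the study.
+ExcelPackage.LicenseContext = LicenseContext.NonCommercial;
+
+var timings = new List<double> { 12.3, 11.8, 12.1 }; // placeholder data
+
+using var package = new ExcelPackage();
+var sheet = package.Workbook.Worksheets.Add("Results");
+sheet.Cells[1, 1].Value = "Iteration";
+sheet.Cells[1, 2].Value = "Elapsed (ms)";
+
+for (int i = 0; i < timings.Count; i++)
+{
+    sheet.Cells[i + 2, 1].Value = i + 1;
+    sheet.Cells[i + 2, 2].Value = timings[i];
+}
+
+package.SaveAs(new FileInfo("benchmark-results.xlsx"));
+\end{verbatim}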
+
+\begin{center}
+    \includegraphics[width=33em]{media/Methodology - data.png}
+    \captionof{figure}{Data Collection and Processing Workflow}
+    \label{fig:data-collection-processing}
+\end{center}
+
+Additionally, the BenchmarkDotNet library was used to measure memory consumption during the tests. BenchmarkDotNet reports detailed memory allocation and garbage collection statistics in its console output, which were captured and stored. After analysis, the results were aggregated and visualized to provide another layer of insight into the libraries' performance characteristics. These visuals were then used to inform our conclusions regarding speed, memory efficiency, and overall suitability for various tasks.
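+
+A minimal sketch of how such a memory-profiled benchmark can be declared with BenchmarkDotNet follows; the class and method names are illustrative, and ImageSharp again stands in for the library under test.
+
+\begin{verbatim}
+using BenchmarkDotNet.Attributes;
+using BenchmarkDotNet.Running;
+using SixLabors.ImageSharp;
+
+// [MemoryDiagnoser] adds allocation and GC statistics to the report.
+[MemoryDiagnoser]
+public class ConversionBenchmark
+{
+    [Benchmark]
+    public void JpegToPng()
+    {
+        using var image = Image.Load("input.jpg");
+        image.SaveAsPng("output.png");
+    }
+}
+
+public static class Program
+{
+    public static void Main() => BenchmarkRunner.Run<ConversionBenchmark>();
+}
+\end{verbatim}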
+
+\section{Conclusion}  
+
+The methodology adopted in this study is not only a tool for performance measurement but also a process of exploration and discovery. Each step, from defining the metrics to processing the data and selecting the libraries, was a deliberate choice aimed at isolating the factors that matter most in image processing.
+
+In conclusion, the methodology provides a robust framework for comparing image processing libraries. It highlights the critical trade-offs between speed, memory usage, ease of integration, and licensing costs. The insights derived from this study offer valuable guidance for developers and researchers alike, paving the way for more efficient and cost-effective image processing solutions in both academic and commercial settings.
diff --git a/media/Methodology - 2.1.png b/media/Methodology - 2.1.png
new file mode 100644
index 0000000000000000000000000000000000000000..559da832fa3efb8e67548b5fbead1919ab0b95ee
Binary files /dev/null and b/media/Methodology - 2.1.png differ
diff --git a/media/Methodology - 2.2.1.png b/media/Methodology - 2.2.1.png
new file mode 100644
index 0000000000000000000000000000000000000000..2333a4466847c94de01abc490caf3be12716eeef
Binary files /dev/null and b/media/Methodology - 2.2.1.png differ
diff --git a/media/Methodology - 2.2.2.png b/media/Methodology - 2.2.2.png
new file mode 100644
index 0000000000000000000000000000000000000000..7e51ae08e5a4c4393bc4ed0f3103dad0e9159c5f
Binary files /dev/null and b/media/Methodology - 2.2.2.png differ
diff --git a/media/Methodology - criteria.png b/media/Methodology - criteria.png
new file mode 100644
index 0000000000000000000000000000000000000000..1e3184266bd67ef8c4a187ffaaa4a620ab5a09dc
Binary files /dev/null and b/media/Methodology - criteria.png differ
diff --git a/media/Methodology - data.png b/media/Methodology - data.png
new file mode 100644
index 0000000000000000000000000000000000000000..400550652641023c43f3b3ae11522d33f46a465b
Binary files /dev/null and b/media/Methodology - data.png differ
diff --git a/media/Methodology - selection.png b/media/Methodology - selection.png
new file mode 100644
index 0000000000000000000000000000000000000000..ff529dd29806aa0d7bc3f184380ea071e9b937be
Binary files /dev/null and b/media/Methodology - selection.png differ
diff --git a/media/log_1.png b/media/log_1.png
index a20192457906a80c72671b727271fbdd019264bd..2a8d041c24a4d8d896d3f55215f7023cc6d3a283 100644
Binary files a/media/log_1.png and b/media/log_1.png differ
diff --git a/media/log_2.png b/media/log_2.png
index 13e0227b42a7f1297b686a53151b9cf77a125c31..69cb6070dfc76c3f43d59a6eb29a1cb7704aeeb8 100644
Binary files a/media/log_2.png and b/media/log_2.png differ