In the context of global efforts toward carbon neutrality, electric vehicles (EVs) have emerged as a pivotal technology for reducing greenhouse gas emissions. The EV battery pack, serving as the core power source, directly influences the safety, efficiency, and lifespan of EVs. Effective thermal management systems are crucial to maintain the EV battery pack within an optimal temperature range, preventing performance degradation and catastrophic failures such as thermal runaway. Among various cooling strategies, liquid cooling has gained prominence due to its superior heat dissipation capabilities. However, designing an efficient liquid cooling system for EV battery packs requires a comprehensive understanding of fluid flow, heat transfer, and electrochemical interactions. In this study, I conduct a detailed numerical investigation into the thermal management performance of a liquid-cooled EV battery pack, focusing on both steady-state and transient operational conditions. The analysis encompasses flow field characteristics, temperature distribution uniformity, and pressure drops, providing insights for optimizing EV battery pack thermal management systems.

The EV battery pack under consideration comprises two battery modules, each consisting of 26 lithium iron phosphate (LiFePO₄) cells, and a liquid cooling plate. Thermally conductive silicone is applied between adjacent cells to mitigate contact thermal resistance, and the assembled modules are mounted on the cooling plate through the same material. The cooling plate, made of AL3003 aluminum alloy, features a complex channel design to facilitate coolant flow. The cooling medium is a 50% water / 50% ethylene glycol mixture, circulated at a flow rate of 5 L/min with an inlet temperature of 18°C. To evaluate the thermal management performance, I developed a coupled thermal-flow-electrochemical model using computational fluid dynamics (CFD) and multiphysics simulation tools. The model accounts for heat generation within the EV battery pack, conductive heat transfer through solid components, convective cooling by the liquid, and electrochemical reactions during charging and discharging cycles.
The numerical simulation is based on the finite volume method, employing the standard k-ε turbulence model to capture the turbulent flow within the cooling channels. The governing equations for fluid flow and heat transfer comprise the continuity, momentum, and energy equations. For the steady-state analysis, the heat generation rate of each cell is assumed constant, corresponding to discharge rates of 0.5 C and 1.0 C. For the transient analysis, a detailed electrochemical model is integrated to compute real-time heat generation during charge-discharge cycles. The electrochemical model adopts a one-dimensional isothermal approach, incorporating charge conservation and material properties for the LiFePO₄ cathode, graphite anode, and electrolyte. The model is validated by comparing simulated discharge voltage-capacity curves with experimental data, ensuring accuracy in predicting the behavior of the EV battery pack.
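
The validation step compares simulated and measured discharge voltage-capacity curves. As a minimal sketch of how such a comparison can be quantified, the snippet below interpolates a simulated curve onto the experimental capacity points and reports the root-mean-square voltage error; the arrays are placeholders for illustration, not the actual curves used in this study.

```python
import numpy as np

# Placeholder discharge curves (capacity in Ah, voltage in V); in practice the
# simulated curve comes from the electrochemical model and the reference curve
# from cell test data.
cap_exp = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
v_exp   = np.array([3.35, 3.30, 3.28, 3.26, 3.20, 2.90])
cap_sim = np.linspace(0.0, 25.0, 101)
v_sim   = 3.35 - 0.004 * cap_sim - 0.25 * (cap_sim / 25.0) ** 8  # illustrative shape

# Interpolate the simulated voltage onto the experimental capacity grid and
# compute the root-mean-square error as a simple validation metric.
v_sim_on_exp = np.interp(cap_exp, cap_sim, v_sim)
rmse = np.sqrt(np.mean((v_sim_on_exp - v_exp) ** 2))
print(f"Voltage RMSE: {rmse * 1000:.1f} mV")
```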
Key boundary conditions and material properties are summarized in Table 1 to provide a clear overview of the simulation setup.

Table 1. Key boundary conditions and material properties.

| Parameter | Value |
|---|---|
| Ambient Temperature | 25°C |
| Cell Heat Generation (0.5 C) | 12.5 W per cell (4,838.4 W/m³) |
| Cell Heat Generation (1.0 C) | 25 W per cell (9,676.8 W/m³) |
| Cell Density | 2,000 kg/m³ |
| Cell Specific Heat Capacity | 967 J/(kg·K) |
| Cell Thermal Conductivity (Anisotropic) | 20 / 20 / 5 W/(m·K) along the three principal directions |
| Thermal Silicone Conductivity | 1.2 W/(m·K) |
| Coolant Volume Flow Rate | 5 L/min |
| Coolant Inlet Temperature | 18°C |
| Coolant Density (50% water + 50% EG) | 1,066 kg/m³ |
| Coolant Dynamic Viscosity | 0.00315 Pa·s |
| Coolant Specific Heat | 3,338 J/(kg·K) |
| Coolant Thermal Conductivity | 0.39 W/(m·K) |
| Cooling Plate Thermal Conductivity (Aluminum) | 202.4 W/(m·K) |
| External Convective Heat Transfer Coefficient | 5 W/(m²·K) |
| Contact Thermal Resistance | 0.0001 (m²·K)/W |
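
As a quick sanity check on these inputs, the coolant temperature rise across the pack can be estimated from a simple energy balance, ΔT = Q / (ṁ·c_p), assuming all cell heat is absorbed by the coolant. The sketch below uses the tabulated values at 0.5 C, with the cell count taken from the module description (2 × 26 cells).

```python
# Energy-balance estimate of the coolant temperature rise at 0.5 C, assuming
# the coolant absorbs all of the heat generated by the cells.
n_cells = 2 * 26            # cells per pack, per the module description
q_cell = 12.5               # W per cell at 0.5 C
rho = 1066.0                # kg/m^3, 50/50 water-ethylene glycol mixture
cp = 3338.0                 # J/(kg*K), coolant specific heat
vdot = 5.0 / 60000.0        # m^3/s, 5 L/min volume flow rate

q_total = n_cells * q_cell  # total heat load, W
mdot = rho * vdot           # coolant mass flow rate, kg/s
dT = q_total / (mdot * cp)  # inlet-outlet temperature rise, K
print(f"Q = {q_total:.0f} W, mdot = {mdot:.4f} kg/s, dT = {dT:.2f} K")
```

The estimate, roughly 2.2 K, is of the same order as the inlet-outlet temperature differences listed in the grid independence study (Table 2).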
The computational domain is discretized using unstructured meshes, with local refinement applied to the fluid domain and boundary layers to accurately resolve flow and thermal gradients. A grid independence study is conducted to ensure solution accuracy, as shown in Table 2. The final mesh consists of approximately 12.3 million cells, balancing computational efficiency and precision.

Table 2. Grid independence study results.

| Number of Grid Cells | Inlet-Outlet Temperature Difference (°C) | Inlet-Outlet Pressure Drop (kPa) |
|---|---|---|
| 4,826,038 | 2.27 | 12.27 |
| 8,457,454 | 2.20 | 11.98 |
| 11,626,786 | 2.15 | 11.83 |
| 12,300,100 | 2.13 | 11.82 |
| 25,215,104 | 2.13 | 11.82 |
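
One way to read Table 2 is to track the relative change of the monitored quantities between successive meshes. The short sketch below does this for the tabulated values; the change drops essentially to zero between the 12.3-million-cell and 25.2-million-cell meshes, supporting the selected grid.

```python
# Relative change of the monitored quantities between successive meshes in the
# grid independence study (Table 2).
cells = [4_826_038, 8_457_454, 11_626_786, 12_300_100, 25_215_104]
dT    = [2.27, 2.20, 2.15, 2.13, 2.13]       # inlet-outlet temperature difference, degC
dp    = [12.27, 11.98, 11.83, 11.82, 11.82]  # inlet-outlet pressure drop, kPa

for i in range(1, len(cells)):
    rel_dT = abs(dT[i] - dT[i - 1]) / dT[i - 1] * 100.0
    rel_dp = abs(dp[i] - dp[i - 1]) / dp[i - 1] * 100.0
    print(f"{cells[i - 1]:>10,} -> {cells[i]:>10,}: "
          f"dT change {rel_dT:.2f}%, dp change {rel_dp:.2f}%")
```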
The governing equations for fluid flow and heat transfer are given below. The continuity, momentum, and energy equations form the basis of the CFD simulation:
Continuity equation: $$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0$$
Momentum equation: $$\frac{\partial (\rho \mathbf{v})}{\partial t} + \nabla \cdot (\rho \mathbf{v} \mathbf{v}) = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \mathbf{f}$$ where $\boldsymbol{\tau}$ is the stress tensor and $\mathbf{f}$ represents body forces.
Energy equation: $$\frac{\partial (\rho e)}{\partial t} + \nabla \cdot (\rho e \mathbf{v}) = -\nabla \cdot \mathbf{q} + \dot{q}$$ with $\mathbf{q} = -k \nabla T$ for conductive heat flux and $\dot{q}$ as the heat source term.
For turbulent flow, the standard k-ε model is employed, involving two additional transport equations for turbulent kinetic energy $k$ and its dissipation rate $\varepsilon$: $$\frac{\partial (\rho k)}{\partial t} + \nabla \cdot (\rho k \mathbf{v}) = \nabla \cdot \left[ \left( \mu + \frac{\mu_t}{\sigma_k} \right) \nabla k \right] + G_k - \rho \varepsilon$$ $$\frac{\partial (\rho \varepsilon)}{\partial t} + \nabla \cdot (\rho \varepsilon \mathbf{v}) = \nabla \cdot \left[ \left( \mu + \frac{\mu_t}{\sigma_\varepsilon} \right) \nabla \varepsilon \right] + C_{1\varepsilon} \frac{\varepsilon}{k} G_k - C_{2\varepsilon} \rho \frac{\varepsilon^2}{k}$$ where $\mu_t = \rho C_\mu \frac{k^2}{\varepsilon}$, and $G_k$ represents the generation of turbulent kinetic energy due to mean velocity gradients.
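
For reference, the standard k-ε closure uses the well-known model constants $C_\mu = 0.09$, $C_{1\varepsilon} = 1.44$, $C_{2\varepsilon} = 1.92$, $\sigma_k = 1.0$, and $\sigma_\varepsilon = 1.3$. The snippet below evaluates the turbulent viscosity from these constants for illustrative turbulence quantities; the values of $k$ and $\varepsilon$ are placeholders, not results of this simulation.

```python
# Standard k-epsilon model constants and the resulting turbulent viscosity for
# illustrative (placeholder) turbulence quantities.
C_MU, C_1E, C_2E, SIGMA_K, SIGMA_E = 0.09, 1.44, 1.92, 1.0, 1.3

rho = 1066.0   # kg/m^3, coolant density from Table 1
k = 0.01       # m^2/s^2, placeholder turbulent kinetic energy
eps = 0.1      # m^2/s^3, placeholder dissipation rate

mu_t = rho * C_MU * k ** 2 / eps
print(f"mu_t = {mu_t:.4f} Pa*s (molecular viscosity: 0.00315 Pa*s)")
```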
In the electrochemical model for the EV battery pack, the charge conservation in the electrolyte phase is described using Ohm’s law with effective conductivity: $$\sigma_{m,\text{eff}} = \sigma_m \varepsilon^\gamma$$ where $\sigma_m$ is the material conductivity, $\varepsilon$ is porosity, and $\gamma$ is the Bruggeman coefficient (taken as 3.3). The heat generation rate during charge-discharge cycles is computed based on the electrochemical reactions, incorporating irreversible and reversible heat effects.
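
As a small worked example of the Bruggeman correction, the function below evaluates the effective conductivity for a few porosity values; the bulk conductivity and porosities are assumptions for illustration, and only the exponent $\gamma = 3.3$ is taken from the model description.

```python
# Effective conductivity via the Bruggeman correction, sigma_eff = sigma * eps**gamma,
# with gamma = 3.3 as specified for the electrochemical model.
def effective_conductivity(sigma: float, porosity: float, gamma: float = 3.3) -> float:
    return sigma * porosity ** gamma

sigma_bulk = 1.0             # S/m, illustrative bulk conductivity (not a model input)
for eps in (0.3, 0.4, 0.5):  # illustrative porosities
    print(f"eps = {eps:.1f}: sigma_eff = {effective_conductivity(sigma_bulk, eps):.3f} S/m")
```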
The steady-state analysis reveals significant insights into the thermal and flow behavior of the EV battery pack. Under a 0.5 C discharge rate, the maximum temperature within the EV battery pack reaches 26.65°C, with a temperature difference of 7.52°C across the cells. At 1.0 C, the maximum temperature increases to 33.48°C, and the temperature difference expands to 13.40°C. This shows how higher discharge rates exacerbate temperature non-uniformity, which can adversely affect the performance and longevity of the EV battery pack. The liquid cooling plate exhibits similar trends, with maximum temperatures of 21.16°C at 0.5 C and 23.57°C at 1.0 C, and corresponding temperature non-uniformities of 3.16°C and 5.57°C, respectively. The coolant temperature distribution shows localized hot spots due to flow recirculation in areas with sudden cross-sectional changes.
The flow field analysis indicates that the coolant velocity is highest near the inlet and outlet regions, decreasing as the flow spreads through the wider channels. Recirculation zones are observed where the flow area expands abruptly, leading to reduced cooling efficiency and higher local temperatures. The pressure distribution within the cooling plate is non-uniform, with a total pressure drop of 11.82 kPa between the inlet and outlet. This substantial pressure loss is primarily concentrated at the inlet and outlet connections, contributing to increased pumping power requirements for the EV battery pack thermal management system. The pressure at various cross-sections is summarized in Table 3, demonstrating the gradual pressure decrease along the flow path.

Table 3. Pressure at selected cross-sections along the cooling plate flow path (relative to the outlet).

| Cross-Section | Pressure (kPa) |
|---|---|
| A (Inlet) | 11.82 |
| B | 9.71 |
| C | 7.46 |
| D | 7.10 |
| E | 6.46 |
| F | 2.25 |
| G (Outlet) | 0.00 |
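
The reported pressure drop translates directly into an ideal pumping power requirement, $P = \Delta p \cdot \dot{V}$. A minimal sketch using the values above:

```python
# Ideal (loss-free) hydraulic pumping power implied by the cooling-plate pressure drop.
dp = 11.82e3          # Pa, inlet-outlet pressure drop
vdot = 5.0 / 60000.0  # m^3/s, 5 L/min coolant flow

p_hydraulic = dp * vdot
print(f"Hydraulic pumping power: {p_hydraulic:.2f} W")  # ~0.99 W before pump losses
```

For realistic pump efficiencies the electrical power draw would be several times higher, which is why the concentrated losses at the inlet and outlet connections are the natural target for design optimization.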
Transient analysis provides a dynamic perspective on the thermal behavior of the EV battery pack. During charge-discharge cycles at 0.5 C, the heat generation rate peaks at approximately 4.93 kW/m³ during discharge and reaches about 3 kW/m³ during charging. For the 1.0 C condition, the heat generation rates are significantly higher, at 17.5 kW/m³ during discharge and 14 kW/m³ during charging. The EV battery pack temperature varies accordingly, with the highest temperature reaching 22.32°C at 0.5 C and 35.17°C at 1.0 C during the discharge phases. The average temperature follows similar patterns, underscoring the impact of operational intensity on thermal management demands. The temperature non-uniformity in the EV battery pack is also time-dependent, with maximum differences of 2.91°C at 0.5 C and 10.25°C at 1.0 C at the instants of peak temperature.
The liquid cooling plate’s transient temperature response mirrors that of the EV battery pack, with maximum temperatures of 19.91°C at 0.5 C and 27.48°C at 1.0 C. The temperature differences across the plate are 1.91°C and 9.48°C, respectively, indicating that higher discharge rates not only elevate temperatures but also exacerbate spatial inhomogeneities. These findings emphasize the need for adaptive cooling strategies in EV battery pack designs to maintain thermal uniformity under varying loads.
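
The transient results above come from the full coupled model. As a much simpler illustration of the same energy balance, a lumped-capacitance sketch treats the pack as a single thermal mass exchanging heat with the coolant, $m c_p \frac{dT}{dt} = \dot{Q}_{\text{gen}} - UA\,(T - T_{\text{cool}})$. The heat generation level, cell volume, density, and specific heat below follow the values reported earlier; the overall conductance $UA$ is an assumed placeholder, so the output is purely illustrative and not a result of this study.

```python
# Lumped-capacitance sketch of the pack temperature during a 1.0 C discharge.
cell_volume = 25.0 / 9676.8              # m^3 per cell (25 W at 9,676.8 W/m^3, Table 1)
n_cells = 2 * 26                         # cells per pack, per the module description
m_pack = 2000.0 * cell_volume * n_cells  # kg, from the cell density in Table 1
cp = 967.0                               # J/(kg*K), cell specific heat from Table 1
ua = 120.0                               # W/K, ASSUMED pack-to-coolant conductance
t_cool = 18.0                            # degC, coolant inlet temperature

q_vol = 17.5e3                           # W/m^3, peak transient heat generation at 1.0 C
q_gen = q_vol * cell_volume * n_cells    # total heat generation, W

t = 18.0                                 # degC, initial pack temperature
dt = 1.0                                 # s, explicit Euler time step
for _ in range(3600):                    # one hour, roughly a full 1.0 C discharge
    dTdt = (q_gen - ua * (t - t_cool)) / (m_pack * cp)
    t += dTdt * dt
print(f"Q_gen = {q_gen:.0f} W, lumped pack temperature after 1 h: {t:.1f} degC")
```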
To quantify the thermal performance, the temperature non-uniformity index $\Delta T_{\text{max}}$ can be defined as the difference between the maximum and minimum temperatures in the EV battery pack. For steady-state conditions, this index increases with discharge rate, as described by the relation: $$\Delta T_{\text{max}} = \alpha \cdot C_{\text{rate}} + \beta$$ where $\alpha$ and $\beta$ are coefficients derived from the simulation data. Similarly, the pressure drop $\Delta p$ across the cooling plate is a critical parameter for pump sizing, expressed as: $$\Delta p = f \frac{L}{D_h} \frac{\rho v^2}{2} + \sum K \frac{\rho v^2}{2}$$ where $f$ is the friction factor, $v$ is the average velocity, $L$ is the channel length, $D_h$ is the hydraulic diameter, and $K$ represents the minor loss coefficients at the inlet and outlet.
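
Using the two steady-state operating points reported above ($\Delta T_{\text{max}}$ = 7.52°C at 0.5 C and 13.40°C at 1.0 C), the coefficients of the linear relation follow directly; the sketch below solves for $\alpha$ and $\beta$, with the caveat that two points can define, but not validate, a linear trend.

```python
# Solve dT_max = alpha * C_rate + beta from the two steady-state points above.
c_rates = (0.5, 1.0)
dT_max  = (7.52, 13.40)   # degC, steady-state pack temperature non-uniformity

alpha = (dT_max[1] - dT_max[0]) / (c_rates[1] - c_rates[0])
beta = dT_max[0] - alpha * c_rates[0]
print(f"alpha = {alpha:.2f} degC per C-rate, beta = {beta:.2f} degC")

# Extrapolated non-uniformity at 2.0 C, for illustration only.
print(f"dT_max(2.0 C) ~ {alpha * 2.0 + beta:.1f} degC")
```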
In conclusion, this numerical study comprehensively analyzes the thermal management performance of a liquid-cooled EV battery pack. The results demonstrate that the cooling plate design leads to significant pressure drops, primarily at inlet and outlet regions, which can increase energy consumption for coolant circulation. Temperature non-uniformity within the EV battery pack is pronounced and worsens with higher discharge rates, potentially affecting cell balance and longevity. Transient analysis reveals that heat generation rates and temperature peaks are more severe during discharge cycles, necessitating robust cooling capacity. These insights underscore the importance of optimizing channel geometry, flow distribution, and material properties to enhance the efficiency and reliability of EV battery pack thermal management systems. Future work could explore advanced cooling structures, such as microchannels or hybrid cooling approaches, to further improve thermal uniformity and reduce pressure losses in EV battery packs.
