Lichen Muqueux : À Propos De 43 Cas
Computing Research Repository (CoRR) (2013)
Abstract
The Soil and Water Assessment Tool (SWAT) and the Hydrologic Simulation Program-Fortran (HSPF) are two river basin simulation models with similar watershed delineation schemes and functionality. Both have been selected as modeling tools to support decision and policy making in the management of the Illinois River Basin in the United States. This paper reports results from calibrating and evaluating the SWAT and HSPF models against hydrologic data in the Illinois River Basin, and further compares the relative performance of the two models in hydrologic simulation and their behavior under calibration. In this study, two calibration approaches, a multi-criteria method and the generalized likelihood uncertainty estimation (GLUE) method, were used to quantify uncertainties arising from the use of multi-site discharge observations and from the presence of equifinal solutions. Both models achieved satisfactory performance after calibration, and the parameter identification of each model was subject to considerable uncertainty. Furthermore, parameter sets exist that enable the HSPF model to predict discharges in the main stem of the Illinois River more accurately than the SWAT model. However, when the two models were run in uncalibrated mode, the distributions of the model-fit summary statistics for HSPF observed in the Monte Carlo sampling during GLUE calibration were more varied than those for SWAT, with heavier tails on the inferior side, while SWAT performed comparably to HSPF on average. This finding implies that the accuracy the HSPF model can achieve in a modeling exercise may depend more heavily on the efficacy of the calibration procedure, and that SWAT may have an advantage when calibration data are scarce or unavailable.
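The GLUE procedure mentioned above can be sketched in a few lines: draw parameter sets by Monte Carlo sampling, score each resulting simulation against observed discharges with a likelihood measure, and retain only the "behavioral" sets whose score exceeds a threshold. The sketch below is a minimal illustration, not the paper's actual implementation; the `run_model` callable, the parameter `bounds`, the Nash-Sutcliffe efficiency as the likelihood measure, and the behavioral `threshold` of 0.5 are all assumptions for the example.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (assumed likelihood measure): 1 is a perfect fit."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue(run_model, bounds, obs, n_samples=10000, threshold=0.5, seed=0):
    """GLUE sketch: Monte Carlo sample parameter sets uniformly within bounds,
    score each simulation against observations, and keep the 'behavioral'
    (theta, score) pairs whose score meets the threshold."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    behavioral = []
    for _ in range(n_samples):
        theta = rng.uniform(lo, hi)          # one candidate parameter set
        score = nse(obs, run_model(theta))   # model fit for this candidate
        if score >= threshold:
            behavioral.append((theta, score))
    return behavioral
```

The spread of scores across the retained (and rejected) samples is what allows the kind of comparison reported here: a model whose score distribution has a heavier inferior tail depends more on calibration finding the good region of parameter space.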