Full-text access type | Articles |
Paid full text | 16622 |
Free | 632 |
Free (domestic) | 226 |
Subject classification | Articles |
Management science | 1919 |
Labour science | 2 |
Ethnology | 62 |
Talent studies | 3 |
Demography | 314 |
Collected works and series | 804 |
Theory and methodology | 360 |
General | 7812 |
Sociology | 533 |
Statistics | 5671 |
Publication year | Articles |
2024 | 122 |
2023 | 156 |
2022 | 239 |
2021 | 273 |
2020 | 422 |
2019 | 520 |
2018 | 565 |
2017 | 684 |
2016 | 566 |
2015 | 562 |
2014 | 896 |
2013 | 2150 |
2012 | 1324 |
2011 | 1031 |
2010 | 854 |
2009 | 845 |
2008 | 920 |
2007 | 902 |
2006 | 826 |
2005 | 704 |
2004 | 586 |
2003 | 493 |
2002 | 429 |
2001 | 378 |
2000 | 236 |
1999 | 181 |
1998 | 96 |
1997 | 100 |
1996 | 75 |
1995 | 62 |
1994 | 45 |
1993 | 40 |
1992 | 38 |
1991 | 40 |
1990 | 24 |
1989 | 18 |
1988 | 15 |
1987 | 10 |
1986 | 7 |
1985 | 14 |
1984 | 8 |
1983 | 9 |
1982 | 5 |
1981 | 1 |
1980 | 1 |
1979 | 4 |
1978 | 2 |
1977 | 1 |
1975 | 1 |
A total of 10,000 query results were found (search time: 15 ms).
1.
2.
If a population contains many zero values and the sample size is not very large, the traditional normal approximation-based confidence intervals for the population mean may have poor coverage probabilities. This problem is substantially reduced by constructing parametric likelihood ratio intervals when an appropriate mixture model can be found. In the context of survey sampling, however, there is a general preference for making minimal assumptions about the population under study. The authors have therefore investigated the coverage properties of nonparametric empirical likelihood confidence intervals for the population mean. They show that under a variety of hypothetical populations, these intervals often outperformed parametric likelihood intervals by having more balanced coverage rates and larger lower bounds. The authors illustrate their methodology using data from the Canadian Labour Force Survey for the year 2000.
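The abstract gives no implementation details, but its core construction, a nonparametric empirical likelihood confidence interval for a population mean, can be sketched in a few lines. The sketch below is a minimal illustration under simple random sampling, not the authors' survey-weighted procedure, and the zero-inflated test data are invented.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_logratio(x, mu):
    """-2 log empirical likelihood ratio for the mean at mu."""
    if mu <= x.min() or mu >= x.max():      # mu must lie inside the convex hull
        return np.inf
    d = x - mu
    # Solve sum(d_i / (1 + lam * d_i)) = 0 for the Lagrange multiplier lam.
    lo = -1.0 / d.max() + 1e-10
    hi = -1.0 / d.min() - 1e-10
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

def el_ci(x, level=0.95):
    """Invert the empirical likelihood ratio test to get a CI for the mean."""
    crit = chi2.ppf(level, df=1)
    f = lambda mu: el_logratio(x, mu) - crit
    lower = brentq(f, x.min() + 1e-6, x.mean())
    upper = brentq(f, x.mean(), x.max() - 1e-6)
    return lower, upper

# Hypothetical zero-inflated population: many exact zeros plus a lognormal tail.
rng = np.random.default_rng(0)
x = np.where(rng.random(80) < 0.6, 0.0, rng.lognormal(0.0, 1.0, 80))
print("EL 95% CI for the mean:", el_ci(x))
```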
3.
Kimmo Eriksson, Scandinavian Journal of Statistics, 2004, 31(2): 203-216
Abstract. This paper surveys the statistical and combinatorial aspects of four areas of comparative genomics: gene-order-based measures of evolutionary distance between species, construction of phylogenetic trees, detection of horizontal gene transfer, and detection of ancient whole-genome duplications.
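As a concrete instance of the gene-order distances mentioned above, a breakpoint distance between two genomes with identical gene content can be sketched as follows; the unsigned, linear-genome formulation and the toy gene orders are assumptions made for illustration, not taken from the survey.

```python
def breakpoint_distance(order_a, order_b):
    """Number of adjacencies of order_a that are absent from order_b.

    Unsigned, linear genomes with identical gene content; an adjacency is an
    unordered pair of neighbouring genes.
    """
    adj_b = {frozenset(p) for p in zip(order_b, order_b[1:])}
    return sum(frozenset(p) not in adj_b for p in zip(order_a, order_a[1:]))

# Toy example: two orderings of the same five genes.
a = ["g1", "g2", "g3", "g4", "g5"]
b = ["g1", "g3", "g2", "g4", "g5"]
print(breakpoint_distance(a, b))  # adjacencies (g1,g2) and (g3,g4) are broken -> 2
```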
4.
John S. J. Hsu, Revue canadienne de statistique, 1995, 23(4): 399-410
This paper presents a new Laplacian approximation to the posterior density of η = g(θ). It has a simpler analytical form than the approximation described by Leonard et al. (1989), which requires a conditional information matrix Rη to be positive definite for every fixed η. In many cases, not all Rη are positive definite; the computation of their approximation then fails because it cannot be normalized. The new approximation, however, can be modified so that the corresponding conditional information matrix is positive definite for every fixed η. In addition, a Bayesian procedure for contingency-table model checking is provided. An example involving the cross-classification of wives' educational level and couples' fertility-planning status is used for illustration. Various Laplacian approximations are computed and compared in this example and in an example of public school expenditures in the context of a Bayesian analysis of the multiparameter Fisher-Behrens problem.
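The paper's specific construction is not reproduced here, but the generic Laplace idea it refines, approximating a posterior by a Gaussian centred at the mode with curvature taken from the log-posterior, can be sketched as follows; the Beta-Binomial example and all numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import beta, norm

# Invented data: y successes out of n trials, Beta(a, b) prior on p.
y, n, a, b = 7, 20, 2.0, 2.0

def neg_log_post(p):
    """Negative unnormalised log posterior of p."""
    return -((a + y - 1) * np.log(p) + (b + n - y - 1) * np.log(1 - p))

# Posterior mode, then curvature by a finite-difference second derivative.
res = minimize_scalar(neg_log_post, bounds=(1e-6, 1 - 1e-6), method="bounded")
p_hat = res.x
h = 1e-5
curv = (neg_log_post(p_hat + h) - 2 * neg_log_post(p_hat) + neg_log_post(p_hat - h)) / h**2
laplace = norm(loc=p_hat, scale=np.sqrt(1.0 / curv))

# The exact posterior here is Beta(a + y, b + n - y); compare a few density values.
exact = beta(a + y, b + n - y)
for p in (0.2, 0.35, 0.5):
    print(f"p={p:.2f}  exact={exact.pdf(p):.3f}  laplace={laplace.pdf(p):.3f}")
```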
5.
We used two statistical methods to identify prognostic factors: a log-linear model (logistic and Cox regression, based on the notions of linearity and multiplicative relative risk), and the CORICO method (ICOnography of CORrelations), based on the geometric significance of the correlation coefficient. We applied the methods to two different situations (a case-control study and a historical cohort). We show that the geometric exploratory tool is particularly suited to the analysis of small samples with a large number of variables, and that it could save time when setting up new study protocols. In this instance, the geometric approach highlighted, without preconceived ideas, the potential role of multihormonality in the course of pituitary adenoma and the unexpected influence of the date of tumour excision on the risk attached to haemorrhage.
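Neither data set is available here, but the two styles of analysis being contrasted can be sketched on synthetic data: a logistic regression fit for a binary outcome next to a plain correlation matrix standing in for a correlation-based exploratory view. This is not the CORICO software, and the variable layout and data are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60                                   # deliberately small sample
X = rng.normal(size=(n, 3))              # three hypothetical prognostic factors
logit = 0.8 * X[:, 0] - 0.5 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Model-based view: logistic regression coefficients and p-values.
fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print("coefficients:", np.round(fit.params, 2))
print("p-values:    ", np.round(fit.pvalues, 3))

# Exploratory view: correlations between outcome and factors, with no model assumed.
data = np.column_stack([y, X])
print(np.round(np.corrcoef(data, rowvar=False), 2))
```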
6.
Jan C. H. van Eijkeren, Risk Analysis, 2002, 22(1): 159-173
A mechanistic model is presented describing the clearance of a compound in a precision-cut liver slice incubated in a culture medium. The problem of estimating metabolic rate constants in PBPK models from liver slice experiments is discussed using identifiability analysis. The analysis shows that, in addition to the clearance, the compound's free fraction in the slice and the diffusion rate governing exchange of the compound between culture medium and liver slice must be identified. Knowledge of the culture medium volume, the slice volume, the compound's free fraction, and the octanol-water-based partition between medium and slice is presupposed. The formal solution for identification is discussed from the perspective of experimental practice. A formally necessary condition for identification is the sampling of parent compound in the liver slice or culture medium. Due to experimental limitations and errors, however, sampling the parent compound in the slice together with additional sampling of metabolite pooled from the medium and the slice is required for identification in practice. Moreover, identification results prove unreliable when the value of the intrinsic clearance exceeds the value of the diffusion coefficient, a condition to be verified a posteriori.
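A minimal sketch of the kind of mechanistic model described, passive exchange of parent compound between culture medium and slice plus intrinsic clearance acting on free compound in the slice, is given below. The parameter names and values are invented and the model is a deliberate simplification, not the author's formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Invented parameters (mL, mL/h, h); purely illustrative, not the paper's values.
V_m, V_s = 1.0, 0.02        # medium and slice volumes
CL_d = 0.5                  # diffusional exchange clearance medium <-> slice
CL_int = 2.0                # intrinsic metabolic clearance in the slice
fu_s = 0.3                  # free fraction of compound in the slice

def rhs(t, y):
    """Amounts of parent compound in medium (A_m) and slice (A_s)."""
    A_m, A_s = y
    C_m, C_s = A_m / V_m, A_s / V_s
    flux = CL_d * (C_m - fu_s * C_s)        # net transfer from medium into slice
    return [-flux, flux - CL_int * fu_s * C_s]

sol = solve_ivp(rhs, (0.0, 8.0), [1.0, 0.0], t_eval=np.linspace(0, 8, 5))
for t, A_m, A_s in zip(sol.t, *sol.y):
    print(f"t={t:4.1f} h  medium={A_m:.3f}  slice={A_s:.4f}")
```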
7.
Projecting losses associated with hurricanes is a complex and difficult undertaking that is wrought with uncertainties. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time loss estimates grew from 2 to 3 billion dollars late on August 12 to a brief peak of 50 billion dollars as the storm appeared to be headed for the Tampa Bay area. The storm hit the resort areas of Charlotte Harbor near Punta Gorda and then went on to Orlando in the central part of the state, with early post-storm estimates converging on damages in the 28 to 31 billion dollar range. Comparable damage to central Florida had not been seen since Hurricane Donna in 1960. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has recognized the role of computer models in projecting losses from hurricanes. The FCHLPM established a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To influence such future analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a Holland B parameter hurricane wind field model. The uncertainty analysis quantifies the expected percentage reduction in the uncertainty of wind speed and loss that is attributable to each of the input variables.
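The audited loss models are proprietary, but the Holland B wind profile at the centre of the analysis, and the kind of Monte Carlo propagation of uncertainty in B, can be sketched as follows. The cyclostrophic form of the Holland (1980) profile is used (Coriolis term dropped for brevity), the storm parameters are invented, and no damage or loss model is included.

```python
import numpy as np

rho = 1.15  # air density, kg/m^3

def holland_wind(r_km, delta_p_pa, r_max_km, B):
    """Wind speed (m/s) from the Holland (1980) profile, cyclostrophic form."""
    x = (r_max_km / r_km) ** B
    return np.sqrt(B * delta_p_pa * x * np.exp(-x) / rho)

# Invented storm: 60 hPa central pressure deficit, 30 km radius of maximum winds.
rng = np.random.default_rng(42)
r = 40.0                                    # km from the storm centre
B_samples = rng.uniform(1.0, 1.8, 10_000)   # uncertainty in the Holland B parameter
v = holland_wind(r, 60e2, 30.0, B_samples)
print(f"wind at {r:.0f} km: mean {v.mean():.1f} m/s, "
      f"5th-95th percentile {np.percentile(v, [5, 95]).round(1)} m/s")
```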
8.
Peter J. Robinson, Risk Analysis, 1992, 12(1): 139-148
Because of the inherent complexity of biological systems, there is often a choice among a number of apparently equally applicable physiologically based models to describe uptake and metabolism processes in toxicology or risk assessment. These models may fit the particular data sets of interest equally well, yet give quite different parameter estimates or predictions under different (extrapolated) conditions. Such competing models can be discriminated by a number of methods, including potential refutation by means of strategic experiments and their ability to suitably incorporate all relevant physiological processes. For illustration, three currently used models for steady-state hepatic elimination (the venous equilibration model, the parallel tube model, and the distributed sinusoidal perfusion model) are reviewed and compared with particular reference to their application in risk assessment. The ability of each model to describe and incorporate physiological processes such as protein binding, precursor-metabolite relations, hepatic zones of elimination, capillary recruitment, capillary heterogeneity, and intrahepatic shunting is discussed. Differences between the models in hepatic parameter estimation, extrapolation to different conditions, and interspecies scaling are discussed, and criteria for choosing one model over the others are presented. In this case, the distributed model provides the most general framework for describing physiological processes taking place in the liver and, unlike the other two models, has so far not been experimentally refuted. The simpler models may, however, provide useful bounds on parameter estimates, extrapolations, and risk assessments.
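Two of the three competing models have simple closed forms that make the comparison concrete: the venous equilibration (well-stirred) and parallel tube expressions for hepatic clearance are sketched below with invented parameter values; the distributed sinusoidal perfusion model requires an additional dispersion term and is omitted here.

```python
import numpy as np

def cl_well_stirred(Q, fu, cl_int):
    """Venous equilibration (well-stirred) hepatic clearance."""
    return Q * fu * cl_int / (Q + fu * cl_int)

def cl_parallel_tube(Q, fu, cl_int):
    """Parallel tube (sinusoidal) hepatic clearance."""
    return Q * (1.0 - np.exp(-fu * cl_int / Q))

Q, fu = 1.5, 0.1                             # hepatic blood flow (L/min), free fraction
for cl_int in (1.0, 10.0, 100.0, 1000.0):    # intrinsic clearance (L/min)
    ws = cl_well_stirred(Q, fu, cl_int)
    pt = cl_parallel_tube(Q, fu, cl_int)
    print(f"CLint={cl_int:7.1f}  well-stirred={ws:.3f}  parallel-tube={pt:.3f}")
```

The printout shows the familiar pattern: the two expressions nearly coincide for low intrinsic clearance and diverge appreciably at intermediate-to-high intrinsic clearance, which is the regime where the choice of model matters for extrapolation.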
9.
The authors consider the optimal design of sampling schedules for binary sequence data. They propose an approach that allows a variety of goals to be reflected in the utility function by including a deterministic sampling cost, a term related to prediction and, if relevant, a term related to learning about a treatment effect. To this end, they use a nonparametric probability model relying on a minimal number of assumptions. They show how their assumption of partial exchangeability for the binary sequence of data allows the sampling distribution to be written as a mixture of homogeneous Markov chains of order k. The implementation follows the approach of Quintana & Müller (2004), which uses a Dirichlet process prior for the mixture.
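The Dirichlet process mixture itself is beyond a short sketch, but the building block being mixed, a homogeneous Markov chain of order k for a binary sequence, is easy to illustrate through its transition counts. The sketch below uses an invented sequence and independent Beta priors per history; the DP prior, the partial-exchangeability argument and the utility terms are all omitted.

```python
from collections import Counter

def transition_counts(seq, k):
    """Counts of (length-k history, next symbol) pairs in a binary sequence."""
    counts = Counter()
    for i in range(k, len(seq)):
        counts[(tuple(seq[i - k:i]), seq[i])] += 1
    return counts

def transition_probs(counts, alpha=1.0):
    """Posterior-mean P(next = 1 | history) under independent Beta(alpha, alpha) priors."""
    histories = {h for h, _ in counts}
    return {h: (counts[(h, 1)] + alpha) / (counts[(h, 0)] + counts[(h, 1)] + 2 * alpha)
            for h in sorted(histories)}

seq = [0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0]   # invented binary sequence
for hist, p in transition_probs(transition_counts(seq, k=2)).items():
    print(f"P(next=1 | last two = {hist}) = {p:.2f}")
```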
10.
Andris Abakuks, Journal of the Royal Statistical Society: Series A (Statistics in Society), 2007, 170(3): 841-850
Summary. In New Testament studies, the synoptic problem concerns the relationships between the gospels of Matthew, Mark and Luke. In an earlier paper, a careful probabilistic specification of Honoré's triple-link model was set up. In the present paper, a modification of Honoré's model is proposed. As before, counts of the numbers of verbal agreements between the gospels are examined to investigate which of the possible triple-link models appears to give the best fit to the data, but now using the modified version of the model and additional sets of data.