  Paid full text   4720 articles
  Free   127 articles
  Free (domestic)   29 articles
Management   316 articles
Ethnology   3 articles
Demography   41 articles
Collected works   82 articles
Theory and methodology   32 articles
General   929 articles
Sociology   75 articles
Statistics   3398 articles
  2024   1 article
  2023   27 articles
  2022   46 articles
  2021   32 articles
  2020   77 articles
  2019   159 articles
  2018   176 articles
  2017   285 articles
  2016   151 articles
  2015   109 articles
  2014   140 articles
  2013   1219 articles
  2012   390 articles
  2011   142 articles
  2010   157 articles
  2009   168 articles
  2008   161 articles
  2007   152 articles
  2006   150 articles
  2005   139 articles
  2004   124 articles
  2003   110 articles
  2002   121 articles
  2001   95 articles
  2000   81 articles
  1999   79 articles
  1998   69 articles
  1997   47 articles
  1996   35 articles
  1995   31 articles
  1994   28 articles
  1993   19 articles
  1992   25 articles
  1991   10 articles
  1990   18 articles
  1989   12 articles
  1988   20 articles
  1987   8 articles
  1986   6 articles
  1985   4 articles
  1984   12 articles
  1983   13 articles
  1982   6 articles
  1981   5 articles
  1980   1 article
  1979   6 articles
  1978   5 articles
  1977   2 articles
  1975   2 articles
  1973   1 article
Sort order:  4876 query results found; search took 15 ms
991.
We propose a bivariate Farlie–Gumbel–Morgenstern (FGM) copula model for bivariate meta-analysis and develop a maximum likelihood estimator for the common mean vector. With the aid of novel mathematical identities for the FGM copula, we derive an explicit expression for the Fisher information matrix. We also derive an approximation formula for the Fisher information matrix that is accurate and easy to compute. Based on the theory of independent but not identically distributed (i.n.i.d.) samples, we examine the asymptotic properties of the estimator. Simulation studies demonstrate the performance of the proposed method, and a real data analysis illustrates it.
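As a rough illustration of the likelihood behind this kind of model, the sketch below (my own code, not the authors') fits an FGM copula with normal margins, a common mean vector, and known within-study standard deviations by numerical maximum likelihood; the function names, the synthetic data, and the use of scipy are assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fgm_log_density(u, v, theta):
    """Log density of the FGM copula: c(u, v) = 1 + theta*(1-2u)*(1-2v), |theta| <= 1."""
    return np.log1p(theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v))

def neg_loglik(params, y1, y2, s1, s2):
    """Negative log-likelihood: normal margins with a common mean vector (mu1, mu2),
    known within-study standard deviations, and FGM dependence."""
    mu1, mu2, theta = params
    u = norm.cdf(y1, loc=mu1, scale=s1)
    v = norm.cdf(y2, loc=mu2, scale=s2)
    ll = (norm.logpdf(y1, loc=mu1, scale=s1)
          + norm.logpdf(y2, loc=mu2, scale=s2)
          + fgm_log_density(u, v, theta))
    return -np.sum(ll)

def fit_fgm_meta(y1, y2, s1, s2):
    """Maximize the likelihood over (mu1, mu2, theta), with theta kept inside [-1, 1]."""
    start = np.array([np.mean(y1), np.mean(y2), 0.0])
    res = minimize(neg_loglik, start, args=(y1, y2, s1, s2),
                   method="L-BFGS-B",
                   bounds=[(None, None), (None, None), (-0.999, 0.999)])
    return res.x  # (mu1_hat, mu2_hat, theta_hat)

# Tiny synthetic example: five studies with known within-study SDs.
y1 = np.array([0.8, 1.1, 0.9, 1.3, 1.0]); s1 = np.array([0.2, 0.3, 0.25, 0.2, 0.3])
y2 = np.array([2.1, 1.8, 2.0, 2.3, 1.9]); s2 = np.array([0.3, 0.2, 0.3, 0.25, 0.2])
print(fit_fgm_meta(y1, y2, s1, s2))
```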
992.
This work considers the problem of estimating a quantile function under different stratified sampling mechanisms. First, we develop an estimator of population quantiles based on stratified simple random sampling (SSRS) and extend the discussion to stratified ranked set sampling (SRSS). The asymptotic behavior of the proposed estimators is then presented. In addition, we derive an analytical expression for the optimal allocation under both sampling schemes. Simulation studies are designed to examine the performance of the proposed estimators under varying distributional assumptions, and their efficiency is further illustrated by analyzing a real data set from the CHNS.
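A minimal sketch of the SSRS part of this construction, assuming the standard approach of inverting a stratum-weighted pooled empirical CDF; the SRSS estimator and the optimal-allocation formula from the paper are not reproduced, and the example data are synthetic.

```python
import numpy as np

def ssrs_quantile(samples, stratum_weights, p):
    """Quantile estimate under stratified simple random sampling (SSRS):
    combine the stratum empirical CDFs with the stratum weights W_h = N_h / N
    and invert the pooled CDF at probability p."""
    values = np.sort(np.concatenate(samples))
    pooled_cdf = np.zeros_like(values, dtype=float)
    for sample, w in zip(samples, stratum_weights):
        sample = np.asarray(sample)
        # Empirical CDF of this stratum evaluated at every candidate point.
        pooled_cdf += w * np.searchsorted(np.sort(sample), values, side="right") / sample.size
    # Smallest observed value at which the pooled CDF reaches p.
    return values[np.argmax(pooled_cdf >= p)]

# Example: three strata with population weights 0.5, 0.3, 0.2; estimate the median.
rng = np.random.default_rng(0)
samples = [rng.normal(0, 1, 40), rng.normal(1, 1, 30), rng.normal(2, 1, 20)]
print(ssrs_quantile(samples, [0.5, 0.3, 0.2], 0.5))
```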
993.
Inverse sampling is an appropriate design for the second phase of capture–recapture experiments and provides an exactly unbiased estimator of the population size. However, the sampling distribution of the resulting estimator tends to be highly right skewed for small recapture samples, so the traditional Wald-type confidence intervals appear to be inappropriate. The objective of this paper is to study the performance of interval estimators for the population size under inverse recapture sampling without replacement. To this end, we consider the Wald-type, logarithmic transformation-based, Wilson score, likelihood ratio and exact methods. We also propose several bootstrap confidence intervals for the population size, including the with-replacement bootstrap (BWR), the without-replacement bootstrap (BWO), and Rao–Wu's rescaling method. A Monte Carlo simulation is employed to evaluate the suggested methods in terms of coverage probability, error rates and standardized average length. Our results show that the likelihood ratio and exact confidence intervals are preferable to the other competitors: their coverage probabilities stay close to the desired nominal level for any sample size, with a more balanced error rate for the exact method and a shorter length for the likelihood ratio method. Notably, the BWO and Rao–Wu's rescaling methods may also provide good intervals in some situations; however, their coverage probabilities are not invariant with respect to the population parameters, so they must be used with care.
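For context, the sketch below shows generic forms of two of the intervals compared above, a Wald-type interval and a delta-method log-transformed interval around a given point estimate and standard error; the exact, likelihood ratio and bootstrap intervals from the paper are not implemented here, and the numbers in the example are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def wald_ci(N_hat, se, level=0.95):
    """Plain Wald-type interval: N_hat +/- z * se."""
    z = norm.ppf(0.5 + level / 2.0)
    return N_hat - z * se, N_hat + z * se

def log_transformed_ci(N_hat, se, level=0.95):
    """Interval built on log(N_hat) and back-transformed; by the delta method
    the standard error of log(N_hat) is roughly se / N_hat.  This keeps the
    lower limit positive and accommodates right skewness."""
    z = norm.ppf(0.5 + level / 2.0)
    factor = np.exp(z * se / N_hat)
    return N_hat / factor, N_hat * factor

print(wald_ci(500.0, 120.0))             # symmetric, can reach implausibly low values
print(log_transformed_ci(500.0, 120.0))  # asymmetric, shifted to the right
```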
994.
In this paper, we suggest a new randomized response model useful for collecting information on quantitative sensitive variables such as drug use and income. The resulting estimator is found to perform better than that of the usual additive randomized response model. An interesting feature of the proposed model is that, unlike the additive model due to Himmelfarb and Edgell [S. Himmelfarb and S.E. Edgell, Additive constant model: a randomized response technique for eliminating evasiveness to quantitative response questions, Psychol. Bull. 87 (1980), 525–530], it does not require the parameters of the scrambling variable to be known. The relative efficiency of the proposed model with respect to the corresponding competitors is also studied, and an application of the proposed model is discussed.
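The sketch below illustrates the classical additive (Himmelfarb–Edgell) model that the proposal is compared against: each respondent reports the sensitive value plus a scrambling variable with known mean, and the analyst subtracts that mean. The function name and the simulated data are illustrative only; the proposed model itself is not reproduced here.

```python
import numpy as np

def additive_rr_mean(z, scramble_mean):
    """Himmelfarb-Edgell additive model: each respondent reports z = x + s,
    where s comes from a scrambling distribution with known mean.
    The sensitive mean is estimated by subtracting that known mean."""
    z = np.asarray(z, dtype=float)
    mu_hat = z.mean() - scramble_mean
    se = z.std(ddof=1) / np.sqrt(z.size)   # Var(z) = Var(x) + Var(s)
    return mu_hat, se

# Simulated check: true sensitive mean 10, scrambling variable ~ N(5, 2^2).
rng = np.random.default_rng(1)
x = rng.normal(10, 3, size=1000)
s = rng.normal(5, 2, size=1000)
print(additive_rr_mean(x + s, scramble_mean=5.0))
```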
995.
Wang Shenzhi (王慎之), Qiushi Xuekan (求是学刊), 2001, 28(4): 41–48
The arrival of the knowledge-economy era means that the science and technology content embodied in social wealth is increasing day by day; software has accordingly become an independent entity and has moved toward commodification. Because software is intangible and, unlike hardware, cannot easily be touched, its existence is often overlooked, giving rise to a number of conceptual misunderstandings. That the total volume of software now exceeds that of hardware is a mark of progress in human history, signalling that social productive forces have entered a brand-new stage. This article gives a comprehensive examination of software commodities in the era of the knowledge economy. The important task at present is to break free from simple, extensive labor and move toward efficient, refined intellectual labor; the era of the knowledge economy is, in effect, the era of the software commodity.
996.
Stochastic gradient descent (SGD) provides a scalable way to compute parameter estimates in applications involving large-scale or streaming data. An alternative version, averaged implicit SGD (AI-SGD), has been shown to be more stable and more efficient. Although the asymptotic properties of AI-SGD have been well established, statistical inference based on it, such as interval estimation, remains unexplored. The bootstrap method is not computationally feasible because it requires repeated resampling from the entire data set, and the plug-in method is not applicable when there is no explicit covariance matrix formula. In this paper, we propose a scalable statistical inference procedure that can be used to conduct inference based on the AI-SGD estimator. The proposed procedure updates the AI-SGD estimate as well as many randomly perturbed AI-SGD estimates upon the arrival of each observation. We derive large-sample theoretical properties of the proposed procedure and examine its performance via simulation studies.
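A crude sketch of the general idea, for a linear model with squared-error loss (where the implicit update has a closed form): the AI-SGD estimate and a set of randomly reweighted copies are all updated online, and the spread of the perturbed averages is used for intervals. The exponential perturbation weights, the step-size schedule and the percentile intervals are my assumptions, not the paper's exact procedure.

```python
import numpy as np

def ai_sgd_with_perturbations(X, y, B=200, gamma0=1.0, alpha=0.6, seed=0):
    """Averaged implicit SGD for linear regression, updated one observation at a
    time, together with B randomly perturbed copies.  Each copy reweights every
    incoming observation by an independent Exp(1) weight (mean one); the spread
    of the perturbed averaged estimates is then used for interval estimation."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    theta = np.zeros(p)                     # implicit SGD iterate
    theta_bar = np.zeros(p)                 # its running average (AI-SGD)
    pert = np.zeros((B, p))                 # perturbed iterates
    pert_bar = np.zeros((B, p))             # their running averages

    for i in range(n):
        x, yi = X[i], y[i]
        gamma = gamma0 / (i + 1) ** alpha   # decaying step size
        # Implicit update has a closed form for squared-error loss.
        resid = yi - x @ theta
        theta = theta + gamma * resid / (1.0 + gamma * x @ x) * x
        theta_bar += (theta - theta_bar) / (i + 1)

        w = rng.exponential(1.0, size=B)    # one random weight per replicate
        resid_b = yi - pert @ x
        step = gamma * w * resid_b / (1.0 + gamma * w * (x @ x))
        pert = pert + step[:, None] * x[None, :]
        pert_bar += (pert - pert_bar) / (i + 1)

    lower = np.percentile(pert_bar, 2.5, axis=0)
    upper = np.percentile(pert_bar, 97.5, axis=0)
    return theta_bar, lower, upper

# Small synthetic check.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(size=5000)
print(ai_sgd_with_perturbations(X, y))
```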
997.
We study the association between bone mineral density (BMD) and body mass index (BMI) when contingency tables are constructed from several U.S. counties, where BMD has three levels (normal, osteopenia and osteoporosis) and BMI has four levels (underweight, normal, overweight and obese). We use the Bayes factor (posterior odds divided by prior odds, or equivalently the ratio of the marginal likelihoods) to construct the new test. Analogous to the chi-squared test and Fisher's exact test, we first have a direct Bayes test, a standard test that uses the data from each county separately. As our main contribution, techniques of small area estimation are used to borrow strength across counties, and a pooled test of independence of BMD and BMI is obtained using a hierarchical Bayesian model. Our pooled Bayes test is computed by performing a Monte Carlo integration using random samples rather than Gibbs samples. When the degree of evidence against independence is studied, we see important differences among the pooled Bayes test, the direct Bayes test and the Cressie–Read test, which allows for some degree of sparseness. As expected, we also find that the direct Bayes test is sensitive to the prior specifications, whereas the pooled Bayes test is much less so. Moreover, the pooled Bayes test has competitive power properties and is superior when the cell counts are small to moderate.
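For the single-county "direct" Bayes test of independence mentioned above, a minimal sketch using symmetric Dirichlet priors on the cell, row and column probabilities is given below. The prior choice and the illustrative 3 x 4 table are assumptions, and the hierarchical pooled test that borrows strength across counties is not reproduced.

```python
import numpy as np
from scipy.special import gammaln

def log_multivariate_beta(a):
    """log B(a) = sum(log Gamma(a_k)) - log Gamma(sum(a_k))."""
    a = np.asarray(a, dtype=float)
    return gammaln(a).sum() - gammaln(a.sum())

def log_bayes_factor_independence(table, prior=1.0):
    """Bayes factor (association vs independence) for a two-way table with
    symmetric Dirichlet(prior) priors; the multinomial coefficient is common
    to both models and cancels."""
    n = np.asarray(table, dtype=float)
    # Saturated model: one Dirichlet over all cells.
    a_full = np.full(n.size, prior)
    log_m1 = log_multivariate_beta(a_full + n.ravel()) - log_multivariate_beta(a_full)
    # Independence model: independent Dirichlets on row and column probabilities.
    a_row = np.full(n.shape[0], prior)
    a_col = np.full(n.shape[1], prior)
    log_m0 = (log_multivariate_beta(a_row + n.sum(axis=1)) - log_multivariate_beta(a_row)
              + log_multivariate_beta(a_col + n.sum(axis=0)) - log_multivariate_beta(a_col))
    return log_m1 - log_m0   # > 0 favours association between BMD and BMI

# Hypothetical 3 (BMD levels) x 4 (BMI levels) table for one county.
table = [[25, 60, 40, 20], [10, 35, 45, 30], [5, 15, 25, 20]]
print(log_bayes_factor_independence(table))
```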
998.
In this article we propose a variant of the Kaplan–Meier estimator that aims to reduce bias by adding a bootstrap-based correction term to the pertaining cumulative hazard function. For the mean lifetime, a simulation study demonstrates that the new estimator also has a smaller variance.
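The paper's specific correction term is not given in the abstract, so the sketch below only shows the generic idea, assuming a nonparametric bootstrap bias correction applied to the cumulative hazard (Nelson–Aalen is used here for simplicity); it should not be read as the authors' estimator.

```python
import numpy as np

def nelson_aalen(times, events, grid):
    """Nelson-Aalen cumulative hazard evaluated on a grid of time points."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = times.size
    at_risk = n - np.arange(n)                  # risk-set size just before each time
    increments = np.where(events == 1, 1.0 / at_risk, 0.0)
    cumhaz = np.cumsum(increments)
    # Step function: value at the largest observed time <= each grid point.
    idx = np.searchsorted(times, grid, side="right") - 1
    return np.where(idx >= 0, cumhaz[np.clip(idx, 0, n - 1)], 0.0)

def bias_corrected_survival(times, events, grid, B=500, seed=0):
    """Generic bootstrap bias correction of the cumulative hazard:
    H_corrected = 2 * H_hat - mean(H_boot), then S = exp(-H_corrected)."""
    rng = np.random.default_rng(seed)
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    H_hat = nelson_aalen(times, events, grid)
    boot = np.empty((B, len(grid)))
    for b in range(B):
        idx = rng.integers(0, times.size, size=times.size)
        boot[b] = nelson_aalen(times[idx], events[idx], grid)
    H_corr = 2.0 * H_hat - boot.mean(axis=0)
    return np.exp(-np.maximum(H_corr, 0.0))     # keep the survival curve in (0, 1]
```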
999.
It is known that collinearity among the explanatory variables in generalized linear models (GLMs) inflates the variance of the maximum likelihood estimator. To overcome multicollinearity in GLMs, the ordinary ridge estimator and the restricted estimator have been proposed. In this study, a restricted ridge estimator is introduced by unifying the ordinary ridge estimator and the restricted estimator in GLMs, and its mean squared error (MSE) properties are discussed. The MSE comparisons are made in the context of first-order approximated estimators. The results are illustrated by a numerical example, and two simulation studies are conducted with Poisson and binomial responses.
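As a sketch of the quantities involved, assuming the standard first-order forms of the ordinary ridge and restricted estimators and one natural way of combining them (the paper's exact definition of the restricted ridge estimator may differ):

```python
import numpy as np

def glm_restricted_ridge(X, W, beta_mle, k, R, r):
    """First-order-approximation estimators for a GLM, with working weight
    matrix W evaluated at the MLE:
      * ordinary ridge:    (X'WX + kI)^{-1} X'WX beta_mle
      * restricted MLE:    beta_mle adjusted to satisfy R beta = r exactly
      * restricted ridge:  ridge shrinkage applied to the restricted MLE
        (one plausible unification; shown here only as an illustration)."""
    XtWX = X.T @ W @ X
    p = XtWX.shape[0]
    C = np.linalg.inv(XtWX)
    shrink = np.linalg.inv(XtWX + k * np.eye(p)) @ XtWX

    beta_ridge = shrink @ beta_mle
    adj = C @ R.T @ np.linalg.solve(R @ C @ R.T, R @ beta_mle - r)
    beta_restricted = beta_mle - adj
    beta_restricted_ridge = shrink @ beta_restricted
    return beta_ridge, beta_restricted, beta_restricted_ridge
```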
1000.
Applied statisticians and pharmaceutical researchers are frequently involved in the design and analysis of clinical trials where at least one of the outcomes is binary. Treatments are judged by the probability of a positive binary response. A typical example is the noninferiority trial, where one tests whether a new experimental treatment is not practically inferior to an active comparator, with a prespecified margin δ. Except for the special case δ = 0, no exact conditional test is available, although approximate conditional methods (also called second-order methods) can be applied. However, in some situations the approximation can be poor, and the logical argument for approximate conditioning is not compelling. The alternative is to consider an unconditional approach. Standard methods such as the pooled z-test are already unconditional, although approximate. In this article, we review and illustrate unconditional methods, with a heavy emphasis on modern methods that can deliver exact, or near-exact, results. For noninferiority trials based on either the rate difference or the rate ratio, our recommendation is to use the so-called E-procedure, based on either the score or the likelihood ratio statistic. This test is effectively exact, computationally efficient, and respects monotonicity constraints in practice. We support our assertions with a numerical study, and we illustrate the concepts developed in theory with a clinical example in pulmonary oncology; R code to conduct all these analyses is available from the authors.
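The simple approximate unconditional approach mentioned above can be sketched as follows; this is a Wald-type z test on the rate difference with unpooled variances, not the exact E-procedure recommended in the paper, and the example numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def noninferiority_z_test(x_exp, n_exp, x_ctl, n_ctl, delta):
    """Approximate (Wald-type) unconditional test of noninferiority on the rate
    difference:  H0: p_exp - p_ctl <= -delta  vs  H1: p_exp - p_ctl > -delta.
    Uses unpooled sample variances."""
    p_e, p_c = x_exp / n_exp, x_ctl / n_ctl
    se = np.sqrt(p_e * (1 - p_e) / n_exp + p_c * (1 - p_c) / n_ctl)
    z = (p_e - p_c + delta) / se
    return z, 1.0 - norm.cdf(z)   # statistic and one-sided p-value

# Hypothetical example: 78/100 responders on the new treatment, 82/100 on the
# active comparator, noninferiority margin delta = 0.10.
print(noninferiority_z_test(78, 100, 82, 100, delta=0.10))
```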