91.
This article investigates the impact of information discrepancy between a drop-shipper and an online retailer on drop-shipping supply chain performance. Inventory information misalignment between the two parties contributes to failures of order fulfillment and demand satisfaction, and hence incurs the associated penalties. We first analyze the penalties of ignoring such information discrepancy for both the drop-shipper and the online retailer. We then assess the impact of information discrepancy on both parties when the drop-shipper understands that the discrepancy exists but is unable to eliminate the errors. The numerical experiments indicate that both parties can achieve significant percentage cost reductions if the information discrepancy is eliminated, and the potential savings are especially substantial when the errors have large variability. Furthermore, we observe that the online retailer is more vulnerable to information discrepancy than the drop-shipper, and that the drop-shipper is likely to suffer more from the online retailer's underestimation of the physical inventory level than from its overestimation. Moreover, even if eliminating errors is not possible, both parties can still benefit from taking the possibility of errors into account in decision making.
92.
One of the objectives of personalized medicine is to base treatment decisions on a biomarker measurement. It is therefore often of interest to evaluate how well a biomarker predicts the response to a treatment. A popular methodology for doing so consists of using a regression model and testing for an interaction between treatment assignment and the biomarker. However, the existence of an interaction is necessary but not sufficient for a biomarker to be predictive. Hence, the use of the marker-by-treatment predictiveness curve has been recommended. Beyond evaluating how well a single continuous biomarker predicts treatment response, it can also help define an optimal threshold. This curve displays the risk of a binary outcome as a function of the quantiles of the biomarker, for each treatment group. Methods that assume a binary outcome or rely on a proportional hazards model for a time-to-event outcome have been proposed to estimate this curve. In this work, we propose extensions for censored data. They rely on a time-dependent logistic model, which we propose to estimate via inverse probability of censoring weighting. We present simulation results and three applications to prostate cancer, liver cirrhosis, and lung cancer data. They suggest that a large number of events must be observed to define a threshold with sufficient accuracy for clinical usefulness. They also illustrate that when the treatment effect varies with the time horizon that defines the outcome, the optimal threshold also depends on this time horizon.
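The abstract does not include code, so as a rough illustration of what a marker-by-treatment predictiveness curve computes, here is a minimal stdlib sketch for a single treatment arm with a fully observed binary outcome. The article's actual estimator handles censoring via a time-dependent logistic model with IPCW, which is not reproduced here; the function name and binning scheme are illustrative assumptions.

```python
def predictiveness_curve(marker, outcome, n_bins=4):
    # Empirical risk of a binary outcome within quantile bins of the
    # biomarker, for one treatment group.  A binned sketch only: the
    # article estimates this curve for censored outcomes via a
    # time-dependent logistic model with IPCW (not shown here).
    pairs = sorted(zip(marker, outcome))              # order by biomarker value
    size = len(pairs) // n_bins
    curve = []
    for b in range(n_bins):
        hi = (b + 1) * size if b < n_bins - 1 else len(pairs)
        chunk = pairs[b * size : hi]                  # one quantile bin
        curve.append(sum(o for _, o in chunk) / len(chunk))  # risk in bin
    return curve
```

Plotting one such curve per treatment group, and reading off where the curves cross or separate, is the informal route to the optimal threshold the abstract discusses.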
93.
Under non-additive probabilities, cluster points of the empirical average have been shown to fall, quasi-surely, within the interval constructed from either the lower and upper expectations or the lower and upper Choquet expectations. In this paper, based on a newly introduced notion of independence, we obtain a different Marcinkiewicz-Zygmund type strong law of large numbers. The Kolmogorov type strong law of large numbers can then be derived from it directly, stating that the closed interval between the lower and upper expectations is the smallest one that quasi-surely covers the cluster points of the empirical average.
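In the usual notation of the sublinear-expectation literature, the Kolmogorov-type statement can be sketched as follows (a paraphrase under assumed notation, not a formula quoted from the paper): writing $\hat{\mathbb{E}}$ for the upper expectation, $\underline{\mu} = -\hat{\mathbb{E}}[-X_1]$ and $\overline{\mu} = \hat{\mathbb{E}}[X_1]$ for the lower and upper means, and $C(\cdot)$ for the set of cluster points,

```latex
% Kolmogorov-type SLLN under non-additive probabilities (sketch):
% quasi-surely, the cluster points of the empirical average fill
% exactly the interval between the lower and upper expectations.
\text{quasi-surely,}\qquad
C\!\left(\left\{\frac{1}{n}\sum_{i=1}^{n} X_i\right\}_{n\ge 1}\right)
\;=\;\bigl[\,\underline{\mu},\;\overline{\mu}\,\bigr].
```

The "smallest interval" claim in the abstract is the equality above: no proper closed subinterval covers all cluster points quasi-surely.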
94.
Single-cohort stage-frequency data arise when the stage reached by individuals is assessed through destructive sampling. For this type of data, when all hazard rates are assumed constant and equal, Laplace transform methods have been applied in the past to estimate the parameters of each stage-duration distribution and the overall hazard rate. If the hazard rates are not all equal, estimating stage-duration parameters by Laplace transform methods becomes complex. In this paper, two new models are proposed for estimating stage-dependent maturation parameters using Laplace transform methods where non-trivial hazard rates apply. The first model allows hazard rates that are constant within each stage but vary between stages. The second model allows time-dependent hazard rates within stages. Moreover, this paper introduces a method for estimating the hazard rate in each stage under the stage-wise constant hazard rates model. The methods presented could be used in specific types of laboratory studies, but the main motivation is to explore the relationships between stage-maturation parameters that, in future work, could be exploited in Bayesian approaches. The application of the methodology under each model is evaluated using simulated data in order to illustrate the structure of these models.
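To make the data structure concrete, here is a small stdlib simulation of stage-frequency counts under the first model: constant hazard within each stage, varying between stages, so each stage duration is exponential with its own rate. This only generates the data the article's estimators would consume; the rates, sampling times, and function name are illustrative assumptions.

```python
import random

def simulate_stage_frequencies(rates, sample_times, n=1000, seed=0):
    # Destructive sampling of a single cohort: at each sampling time t,
    # a fresh batch of n individuals is observed and the stage each has
    # reached is recorded.  Stage durations are exponential with
    # per-stage rates (the stage-wise constant hazard model).
    rng = random.Random(seed)
    counts = []
    for t in sample_times:
        tally = [0] * (len(rates) + 1)       # last cell = all stages completed
        for _ in range(n):
            elapsed, stage = 0.0, 0
            while stage < len(rates):
                elapsed += rng.expovariate(rates[stage])
                if elapsed > t:              # still in `stage` at time t
                    break
                stage += 1
            tally[stage] += 1
        counts.append(tally)
    return counts
```

Recovering the stage-duration parameters from such counts is where the article's Laplace transform machinery enters; the simulation merely illustrates the shape of the data.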
95.
This paper deals with a longitudinal semi-parametric regression model in a generalised linear model setup for repeated count data collected from a large number of independent individuals. To accommodate longitudinal correlations, we consider a dynamic model for repeated counts whose auto-correlations decay as the time lag between repeated responses increases. The semi-parametric regression function in the model contains a specified regression function in some suitable time-dependent covariates and a non-parametric function in some other time-dependent covariates. As the non-parametric function is of secondary inferential interest, we estimate it consistently using the well-known quasi-likelihood approach based on the independence assumption. Next, the proposed longitudinal correlation structure and the estimate of the non-parametric function are used to develop a semi-parametric generalised quasi-likelihood approach for consistent and efficient estimation of the regression effects in the parametric regression function. The finite-sample performance of the proposed estimation approach is examined through an intensive simulation study based on both large and small samples, incorporating both balanced and unbalanced cluster sizes. The asymptotic performance of the estimators is also given. The estimation methodology is illustrated by reanalysing the well-known health care utilisation data consisting of counts of yearly visits to a physician by 180 individuals over four years, together with several important primary and secondary covariates.
96.
We update a previous approach to estimating the size of an open population when there are multiple lists at each time point. Our motivation is 35 years of longitudinal data on the detection of drug users by the Central Registry of Drug Abuse in Hong Kong. We develop a two-stage smoothing spline approach, which gives a flexible and easily implemented alternative to the previous method based on kernel smoothing. The new method retains the property of reducing the variability of the individual estimates at each time point. We evaluate the new method by means of a simulation study that includes an examination of the effects of variable selection. The new method is then applied to data collected by the Central Registry of Drug Abuse, and the parameter estimates obtained are compared with the well-known Jolly-Seber estimates based on single-capture methods.
97.
Clinical trials are often designed to compare continuous non-normal outcomes. The conventional statistical method for such a comparison is the non-parametric Mann-Whitney test, which provides a P-value for testing the hypothesis that the distributions of the two treatment groups are identical, but does not provide a simple and straightforward estimate of the treatment effect. For that purpose, Hodges and Lehmann proposed estimating the shift parameter between the two populations and its confidence interval (CI). However, the shift parameter does not have a straightforward interpretation, and its CI sometimes contains zero even when the Mann-Whitney test produces a significant result. To overcome these problems, we introduce the win ratio for analysing such data. Patients in the new-treatment and control groups are formed into all possible pairs. For each pair, the new-treatment patient is labelled a 'winner' or a 'loser' if it is known who had the more favourable outcome. The win ratio is the total number of winners divided by the total number of losers. A 95% CI for the win ratio can be obtained using the bootstrap method. Statistical properties of the win ratio statistic are investigated using two real trial data sets and six simulation studies. Results show that the win ratio method has about the same power as the Mann-Whitney method. We recommend the win ratio method for estimating the treatment effect (and its CI) and the Mann-Whitney method for calculating the P-value when comparing continuous non-normal outcomes with a small number of tied pairs. Copyright © 2016 John Wiley & Sons, Ltd.
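The pairwise construction described above is simple to sketch. The stdlib-only illustration below assumes larger outcomes are more favourable and silently skips tied pairs (consistent with the abstract's caveat that tied pairs should be few); the function names and bootstrap settings are assumptions, not the authors' code.

```python
import random

def win_ratio(treatment, control):
    # Form all treatment-control pairs; count pairs where the
    # new-treatment patient has the more favourable (here: larger)
    # outcome as wins, the reverse as losses.  Ties are skipped.
    wins = sum(1 for t in treatment for c in control if t > c)
    losses = sum(1 for t in treatment for c in control if t < c)
    return wins / losses          # assumes at least one loss occurs

def bootstrap_ci(treatment, control, n_boot=2000, alpha=0.05, seed=1):
    # Percentile bootstrap CI for the win ratio, resampling patients
    # (not pairs) within each arm.
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        t = [rng.choice(treatment) for _ in treatment]
        c = [rng.choice(control) for _ in control]
        w = sum(1 for x in t for y in c if x > y)
        l = sum(1 for x in t for y in c if x < y)
        if w and l:               # drop degenerate resamples
            stats.append(w / l)
    stats.sort()
    return (stats[int(alpha / 2 * len(stats))],
            stats[int((1 - alpha / 2) * len(stats)) - 1])
```

For example, `win_ratio([3, 5, 7], [2, 4, 6])` compares nine pairs, of which six favour the new treatment and three the control, giving a win ratio of 2.0.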
98.
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis and describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distributions of the standard deviation estimator and the final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level that ensures type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
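As a minimal sketch of the blinded re-estimation step, the function below pools the interim outcomes with treatment labels hidden, applies the simple one-sample variance estimator, and plugs it into the usual normal-approximation sample-size formula. The formula and names are assumptions of this sketch, not quoted from the paper.

```python
import math
from statistics import NormalDist, variance

def blinded_reestimate(pooled_interim, delta, alpha=0.05, power=0.8):
    # Blinded re-estimation sketch: `pooled_interim` holds interim
    # outcomes with group membership hidden, so the simple one-sample
    # variance estimator is computed on the pooled data.  The per-arm n
    # comes from n = 2 * (z_{1-alpha/2} + z_power)^2 * s^2 / delta^2,
    # the standard normal-approximation formula (an assumption here).
    s2 = variance(pooled_interim)            # one-sample variance estimator
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return math.ceil(2 * (za + zb) ** 2 * s2 / delta ** 2)
```

The abstract's point is that carrying such a re-estimated n into the final two-sample t-test can inflate the type I error, which motivates the adjusted significance level it derives.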
99.
Research on profit manipulation by listed companies has mainly treated net profit as a sensitive indicator of manipulation, but under the ongoing game between regulators and the regulated, its potential for manipulation has already been fully exploited. This study takes an empirical perspective on other feasible channels of financial manipulation by listed companies, including some seemingly reasonable disguised means of financial manipulation. Following the Guidelines for Industry Classification of Listed Companies issued by the China Securities Regulatory Commission in 2001, Chinese listed companies are divided into 13 industries. Granger causality tests and regression analyses are applied, for 2003-2013, to selected subjectively controllable financial indicators of firms in each industry and to the industry's average price-to-earnings ratio. The results show that, in some industries, certain financial indicators lead or lag the industry's average price-to-earnings ratio, with a significant effect.
100.
A Study on the Effectiveness of Stationarity Testing Methods (total citations: 2; self-citations: 1; citations by others: 1)
Stationarity testing is an important topic in time series analysis, yet the performance of existing tests lacks systematic comparative analysis. This paper studies the performance of stationarity tests from the perspective of sample length, conducting an empirical study with four methods: the ADF, PP, KPSS, and LMC tests. Simulation results show that the length of the time series clearly affects test accuracy: accuracy is low when the series is short, and increases as the series lengthens, though it never reaches the 100% ceiling. When the sample is small, the asymptotic distributions of these test statistics are hard to attain, so their practical performance merits scrutiny. Since sample lengths are always finite, tests based on asymptotic distributions have limited room for improvement, and new testing approaches deserve exploration.
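The abstract above compares the ADF, PP, KPSS, and LMC tests as sample length varies. As a stdlib-only illustration of the underlying idea, the sketch below implements a non-augmented Dickey-Fuller regression (intercept included, approximate 5% critical value -2.86) and measures how often it correctly rejects the unit root for stationary AR(1) data at different lengths; all names and settings are illustrative, not the paper's code.

```python
import math
import random

def df_tstat(y):
    # Non-augmented Dickey-Fuller regression with intercept:
    # regress dy_t = a + b * y_{t-1} + e_t and return the t-statistic of b.
    x = y[:-1]
    dy = [y[i + 1] - y[i] for i in range(len(y) - 1)]
    n = len(x)
    mx, my = sum(x) / n, sum(dy) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (di - my) for xi, di in zip(x, dy)) / sxx
    a = my - b * mx
    s2 = sum((di - a - b * xi) ** 2 for xi, di in zip(x, dy)) / (n - 2)
    return b / math.sqrt(s2 / sxx)

def rejection_rate(n, phi, n_rep=200, crit=-2.86, seed=0):
    # Share of simulated AR(1) series (coefficient phi < 1, hence
    # stationary) for which the DF statistic rejects the unit root at
    # the approximate 5% critical value.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rep):
        y = [0.0]
        for _ in range(n - 1):
            y.append(phi * y[-1] + rng.gauss(0, 1))
        if df_tstat(y) < crit:
            hits += 1
    return hits / n_rep
```

With short series the rejection rate stays near the test's nominal size; it climbs toward 1 as the series lengthens, mirroring the paper's finding that accuracy improves with data length without ever guaranteeing 100%.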