Full text (subscription access): 4381 articles
Free full text: 942 articles
Management: 1108 articles
Ethnology: 5 articles
Demography: 49 articles
Theory and methodology: 819 articles
General: 22 articles
Sociology: 1627 articles
Statistics: 1693 articles
2023: 1 article
2022: 4 articles
2021: 96 articles
2020: 175 articles
2019: 371 articles
2018: 232 articles
2017: 408 articles
2016: 358 articles
2015: 353 articles
2014: 376 articles
2013: 759 articles
2012: 475 articles
2011: 269 articles
2010: 268 articles
2009: 163 articles
2008: 207 articles
2007: 105 articles
2006: 106 articles
2005: 111 articles
2004: 117 articles
2003: 84 articles
2002: 89 articles
2001: 92 articles
2000: 74 articles
1999: 4 articles
1998: 6 articles
1997: 1 article
1996: 4 articles
1995: 4 articles
1994: 1 article
1993: 1 article
1992: 2 articles
1991: 1 article
1988: 2 articles
1986: 1 article
1985: 1 article
1984: 1 article
1975: 1 article
A total of 5323 results were found (search time: 31 ms).
181.
182.
In 2008, Industry Canada auctioned 105 MHz of spectrum to a group of bidders that included incumbents and potential new entrants into the Canadian mobile phone market, raising $4.25 billion. In an effort to promote new entry, 40 MHz of spectrum was set aside for new entrants. To estimate the implicit cost of this set-aside provision, we estimate the parameters of the bidders' profit function via a maximum match estimator based on the notion of pairwise stability in matches. We find that all telecommunications firms valued both geographic complementarities across auction licenses and the absolute amount of spectrum. Under a reasonable alternative scenario, our results indicate that the set-aside led to a total profit loss of approximately 10%.
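As a rough illustration of the estimation idea named above, the display below gives the generic pairwise-stability inequality on which maximum match (maximum score matching) estimators are typically built; the profit function $\pi$ and parameter $\theta$ are schematic and do not reproduce the paper's specification. For any two observed bidder-package matches $(b,\ell)$ and $(b',\ell')$, pairwise stability requires

$$\pi(b,\ell;\theta) + \pi(b',\ell';\theta) \;\ge\; \pi(b,\ell';\theta) + \pi(b',\ell;\theta),$$

and the estimator chooses $\hat\theta$ to maximize the number of such inequalities that are satisfied:

$$\hat\theta = \arg\max_{\theta} \sum_{(b,\ell),\,(b',\ell')} \mathbf{1}\!\left\{\pi(b,\ell;\theta) + \pi(b',\ell';\theta) \ge \pi(b,\ell';\theta) + \pi(b',\ell;\theta)\right\}.$$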
183.
This article investigates the impact of information discrepancy between a drop-shipper and an online retailer on drop-shipping supply chain performance. Misaligned inventory information between the two parties leads to failed order fulfillment and unmet demand, and hence to the associated penalties. We first analyze the penalties of ignoring such information discrepancy for both the drop-shipper and the online retailer. We then assess the impact of information discrepancy on both parties when the drop-shipper is aware of the discrepancy but unable to eliminate the errors. Numerical experiments indicate that both parties can achieve significant percentage cost reductions if the information discrepancy is eliminated, and the potential savings are substantial, especially when the errors have large variability. Furthermore, we observe that the online retailer is more vulnerable to information discrepancy than the drop-shipper, and that the drop-shipper suffers more from the online retailer's underestimation of the physical inventory level than from its overestimation. Moreover, even when eliminating the errors is not possible, both parties can still benefit from taking the possibility of errors into account in their decision making.
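The abstract does not specify the underlying cost model, so the following Python sketch is only a toy illustration of the mechanism it describes: a noisy inventory record leads the retailer to accept orders the drop-shipper cannot fill, creating a penalty that disappears when the record error is removed. All quantities (demand, stock levels, penalty rate) are made up.

```python
import numpy as np

# Toy illustration (not the paper's model): the retailer accepts orders based on
# the *recorded* inventory, but fulfillment is limited by the *actual* inventory.
rng = np.random.default_rng(0)
n_periods = 10_000
actual_inventory = rng.poisson(50, n_periods)            # drop-shipper's true stock
record_error = rng.normal(0, 10, n_periods).round()      # discrepancy in the record
recorded_inventory = np.maximum(actual_inventory + record_error, 0)
demand = rng.poisson(45, n_periods)

accepted = np.minimum(demand, recorded_inventory)        # orders accepted by the retailer
fulfilled = np.minimum(accepted, actual_inventory)       # orders the drop-shipper can ship
penalty_per_unit = 5.0
penalty_with_error = penalty_per_unit * (accepted - fulfilled).sum()

# With perfect information the retailer never accepts more than can be shipped,
# so the unfulfillment penalty is zero by construction in this toy setting.
penalty_without_error = 0.0

print(f"unfulfillment penalty with record errors:    {penalty_with_error:,.0f}")
print(f"unfulfillment penalty without record errors: {penalty_without_error:,.0f}")
```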
184.
Process regression methodology is underdeveloped relative to the frequency with which pertinent data arise. In this article, the response is a binary indicator process representing the joint event of being alive and remaining in a specific state. The process is indexed by time (e.g., time since diagnosis) and observed continuously. Data of this sort occur frequently in the study of chronic disease. A general area of application involves a recurrent event with non-negligible duration (e.g., hospitalization and the associated length of stay) that is subject to a terminating event (e.g., death). We propose a semiparametric multiplicative model for the process version of the probability of being alive and in the (transient) state of interest. Under the proposed methods, the regression parameter is estimated through a procedure that does not require estimating the baseline probability. Unlike the majority of process regression methods, the proposed methods accommodate multiple sources of censoring. In particular, we derive a computationally convenient variant of inverse probability of censoring weighting based on the additive hazards model. We show that the regression parameter estimator is asymptotically normal and that the baseline probability function estimator converges to a Gaussian process. Simulations demonstrate that our estimators have good finite sample performance. We apply our method to national end-stage liver disease data. The Canadian Journal of Statistics 48: 222–237; 2020 © 2019 Statistical Society of Canada
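As a minimal sketch of one ingredient mentioned above, inverse probability of censoring weighting with an additive hazards model for the censoring time, the Python fragment below uses the lifelines library; the data frame, covariates, and evaluation time are hypothetical, and the paper's actual estimating equations are not reproduced.

```python
import numpy as np
import pandas as pd
from lifelines import AalenAdditiveFitter

# Hypothetical data layout: follow-up time, terminating-event indicator, covariates.
df = pd.DataFrame({
    "time": [2.1, 3.5, 1.2, 4.8, 0.9, 5.0],   # follow-up time (years)
    "event": [1, 0, 1, 0, 1, 0],               # 1 = terminating event, 0 = censored
    "age": [61, 54, 70, 48, 66, 59],
    "meld": [18, 22, 30, 15, 25, 20],
})

# Model the *censoring* hazard additively: treat censoring as the "event".
cens = df.assign(censored=1 - df["event"])
aaf = AalenAdditiveFitter(coef_penalizer=0.1)
aaf.fit(cens[["time", "censored", "age", "meld"]],
        duration_col="time", event_col="censored")

t0 = 2.0  # time point at which the alive-and-in-state probability is evaluated
surv = aaf.predict_survival_function(df[["age", "meld"]])   # G_hat(t | Z_i) per subject
G_t0 = surv[surv.index <= t0].iloc[-1].to_numpy()

# Simplified weighting: subjects whose follow-up reaches t0 get 1 / G_hat(t0 | Z_i),
# all others get weight 0 in this sketch.
at_risk = df["time"].to_numpy() >= t0
ipcw = np.where(at_risk, 1.0 / np.clip(G_t0, 1e-6, None), 0.0)
print(ipcw)
```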
185.
One of the objectives of personalized medicine is to make treatment decisions based on a biomarker measurement. It is therefore often of interest to evaluate how well a biomarker can predict the response to a treatment. A popular approach is to use a regression model and test for an interaction between treatment assignment and the biomarker. However, the existence of an interaction is necessary but not sufficient for a biomarker to be predictive, and the use of the marker-by-treatment predictiveness curve has therefore been recommended. In addition to evaluating how well a single continuous biomarker predicts treatment response, this curve can help to define an optimal threshold. It displays the risk of a binary outcome as a function of the quantiles of the biomarker, for each treatment group. Methods that assume a binary outcome or rely on a proportional hazards model for a time-to-event outcome have been proposed to estimate this curve. In this work, we propose extensions for censored data. They rely on a time-dependent logistic model, which we propose to estimate via inverse probability of censoring weighting. We present simulation results and three applications to prostate cancer, liver cirrhosis, and lung cancer data. They suggest that a large number of events must be observed to define a threshold with sufficient accuracy for clinical use. They also illustrate that when the treatment effect varies with the time horizon that defines the outcome, the optimal threshold also depends on that horizon.
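A minimal sketch, under simplifying assumptions, of how a predictiveness curve at a fixed horizon t0 might be estimated with an IPCW-weighted logistic model: the censoring weights here come from a simple Kaplan-Meier fit rather than the paper's exact procedure, and all data and variable names are simulated and hypothetical.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
trt = rng.integers(0, 2, n)                         # treatment arm
x = rng.normal(size=n)                              # continuous biomarker
t_event = rng.exponential(scale=np.exp(0.5 * x * trt), size=n)
t_cens = rng.exponential(scale=3.0, size=n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

t0 = 1.0                                            # horizon defining the binary outcome
known = ((time <= t0) & (event == 1)) | (time > t0) # status at t0 is known for these subjects
y = ((time <= t0) & (event == 1)).astype(int)

# IPCW weights from the censoring survival curve G(t) = P(C > t).
kmf = KaplanMeierFitter().fit(time, event_observed=1 - event)
G = lambda t: kmf.survival_function_at_times(t).to_numpy()
w = np.where(known, 1.0 / np.clip(G(np.minimum(time, t0)), 1e-6, None), 0.0)

# Weighted logistic model in biomarker quantile, treatment, and their interaction.
q = np.argsort(np.argsort(x)) / (n - 1)             # empirical quantiles of the biomarker
Z = np.column_stack([q, trt, q * trt])
fit = LogisticRegression().fit(Z[known], y[known], sample_weight=w[known])

# Predictiveness curves: risk at t0 as a function of the biomarker quantile, per arm.
grid = np.linspace(0.01, 0.99, 50)
risk_control = fit.predict_proba(np.column_stack([grid, 0 * grid, 0 * grid]))[:, 1]
risk_treated = fit.predict_proba(np.column_stack([grid, 0 * grid + 1, grid]))[:, 1]
```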
186.
Under non-additive probabilities, cluster points of the empirical average have been shown to fall, quasi-surely, within the interval formed by either the lower and upper expectations or the lower and upper Choquet expectations. In this paper, based on a notion of independence introduced for this setting, we obtain a different Marcinkiewicz–Zygmund-type strong law of large numbers. A Kolmogorov-type strong law of large numbers then follows directly: the closed interval between the lower and upper expectations is the smallest interval that quasi-surely contains the cluster points of the empirical average.
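Schematically, with $S_n = X_1 + \cdots + X_n$ and lower and upper expectations $\underline{\mathbb{E}}$ and $\overline{\mathbb{E}}$, the Kolmogorov-type conclusion described above can be written as follows (notation illustrative):

$$\underline{\mathbb{E}}[X_1] \;\le\; \liminf_{n\to\infty}\frac{S_n}{n} \;\le\; \limsup_{n\to\infty}\frac{S_n}{n} \;\le\; \overline{\mathbb{E}}[X_1] \quad \text{quasi-surely},$$

and $\big[\underline{\mathbb{E}}[X_1],\, \overline{\mathbb{E}}[X_1]\big]$ is the smallest closed interval that quasi-surely contains all cluster points of $S_n/n$.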
187.
In the literature, the Lindley distribution has been considered as an alternative to the exponential distribution for fitting lifetime data. In the present work, a Lindley step-stress model with independent causes of failure is proposed. An algorithm to generate random samples from the proposed model under a type-I censoring scheme is developed. Point and interval estimation of the model parameters is carried out using the maximum likelihood method and a percentile bootstrap approach. To assess the quality of the resulting estimates, numerical illustrations based on simulated and real-life data sets are provided.
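A minimal Python sketch of the sample-generation step: Lindley(θ) lifetimes can be drawn via the standard exponential/gamma mixture representation of the Lindley density and then subjected to type-I censoring at a fixed time τ. The step-stress structure and the competing causes of failure in the proposed model are not reproduced here.

```python
import numpy as np

def rlindley(theta: float, size: int, rng: np.random.Generator) -> np.ndarray:
    """Draw from Lindley(theta): f(x) = theta^2/(1+theta) * (1+x) * exp(-theta*x)."""
    # Mixture representation: Exp(theta) with prob theta/(1+theta),
    # Gamma(shape=2, rate=theta) with the remaining probability.
    use_exp = rng.random(size) < theta / (1.0 + theta)
    exp_draws = rng.exponential(1.0 / theta, size)
    gamma_draws = rng.gamma(2.0, 1.0 / theta, size)
    return np.where(use_exp, exp_draws, gamma_draws)

rng = np.random.default_rng(42)
theta, tau, n = 0.8, 3.0, 1000            # tau = fixed type-I censoring time
lifetimes = rlindley(theta, n, rng)
observed = np.minimum(lifetimes, tau)      # observed time
delta = (lifetimes <= tau).astype(int)     # 1 = failure observed, 0 = censored at tau
print(f"observed failures: {delta.sum()} of {n}")
```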
188.
Single-cohort stage-frequency data are considered when the stage reached by individuals is assessed through destructive sampling. For this type of data, when all hazard rates are assumed constant and equal, Laplace transform methods have been applied in the past to estimate the parameters of each stage-duration distribution and the overall hazard rates. If the hazard rates are not all equal, estimating stage-duration parameters using Laplace transform methods becomes complex. In this paper, two new models are proposed for estimating stage-dependent maturation parameters using Laplace transform methods where non-trivial hazard rates apply. The first model allows hazard rates that are constant within each stage but vary between stages. The second model allows time-dependent hazard rates within stages. Moreover, this paper introduces a method for estimating the hazard rate in each stage under the stage-wise constant hazard rates model. This work presents methods that could be used in specific types of laboratory studies, but the main motivation is to explore the relationships between stage maturation parameters that, in future work, could be exploited in applying Bayesian approaches. The application of the methodology under each model is evaluated using simulated data in order to illustrate the structure of these models.
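The Laplace transform approach referred to above rests, in essence, on the factorization of the transform of a stage-entry time; a schematic version (notation illustrative, mortality omitted) is

$$W_k=\sum_{j=1}^{k}T_j,\qquad \varphi_{W_k}(s)=\mathbb{E}\!\left[e^{-sW_k}\right]=\prod_{j=1}^{k}\varphi_{T_j}(s),\qquad \varphi_{T_j}(s)=\frac{\lambda_j}{\lambda_j+s}\ \text{ under a constant within-stage hazard } \lambda_j,$$

where $T_j$ is the duration of stage $j$, $W_k$ is the time of entry into stage $k+1$, and the stage durations are assumed independent.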
189.
This paper deals with a longitudinal semi-parametric regression model in a generalised linear model setup for repeated count data collected from a large number of independent individuals. To accommodate the longitudinal correlations, we consider a dynamic model for repeated counts in which the autocorrelations decay as the time lag between responses increases. The semi-parametric regression function in the model contains a specified regression function of some suitable time-dependent covariates and a non-parametric function of other time-dependent covariates. Because the non-parametric function is of secondary interest for inference, we estimate it consistently using the well-known quasi-likelihood approach under a working independence assumption. Next, the proposed longitudinal correlation structure and the estimate of the non-parametric function are used to develop a semi-parametric generalised quasi-likelihood approach for consistent and efficient estimation of the regression effects in the parametric regression function. The finite sample performance of the proposed estimation approach is examined through an intensive simulation study based on both large and small samples, incorporating both balanced and unbalanced cluster sizes. The asymptotic properties of the estimators are also given. The estimation methodology is illustrated by reanalysing the well-known health care utilisation data, consisting of counts of yearly visits to a physician by 180 individuals over four years together with several important primary and secondary covariates.
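One common way to write such a semi-parametric mean model and the associated generalised quasi-likelihood estimating equation is sketched below; the notation is illustrative and may differ from the paper's exact formulation:

$$\mathbb{E}(y_{it}\mid x_{it}, z_{it}) = \mu_{it} = \exp\{x_{it}^{\top}\beta + \psi(z_{it})\},\qquad \operatorname{corr}(y_{iu}, y_{it}) \text{ decaying as } |u-t| \text{ grows},$$

with the regression effects $\beta$ estimated by solving

$$\sum_{i=1}^{K}\frac{\partial \mu_i^{\top}}{\partial \beta}\,\Sigma_i^{-1}(\hat\rho)\,\big(y_i-\mu_i\big)=0,$$

where $\Sigma_i(\rho)$ is the working longitudinal covariance implied by the dynamic correlation structure and $\psi(\cdot)$ is the non-parametric function estimated in the first step.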
190.
We update a previous approach to estimating the size of an open population when there are multiple lists at each time point. Our motivation is 35 years of longitudinal data on the detection of drug users by the Central Registry of Drug Abuse in Hong Kong. We develop a two-stage smoothing spline approach; this gives a flexible and easily implemented alternative to the previous method, which was based on kernel smoothing. The new method retains the property of reducing the variability of the individual estimates at each time point. We evaluate the new method by means of a simulation study that includes an examination of the effects of variable selection. The new method is then applied to data collected by the Central Registry of Drug Abuse, and the parameter estimates obtained are compared with the well-known Jolly–Seber estimates based on single-capture methods.
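As a toy illustration of the second (smoothing) stage only, the Python fragment below smooths a made-up series of yearly population-size estimates with a smoothing spline; it does not reproduce the paper's two-stage estimator, its smoothing-parameter choice, or the Hong Kong registry data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Made-up first-stage estimates: one population-size estimate per year (35 years).
rng = np.random.default_rng(7)
years = np.arange(35)
raw_estimates = 20_000 + 6_000 * np.sin(years / 5.0) + rng.normal(0, 1_500, years.size)

# Second stage: smooth the yearly estimates; larger s gives a smoother curve.
# Here s is chosen by eye, not by the criterion used in the paper.
spline = UnivariateSpline(years, raw_estimates, s=years.size * 1_500**2)
smoothed = spline(years)
print(np.round(smoothed[:5]))
```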