Search results: 2,604 matching articles (items 91–100 shown below).
91.
In this article, the quality of data produced by national statistical institutes and by governmental institutions is considered. In particular, the problem of measurement error is analyzed, and a decision support system based on non-parametric Bayesian networks is proposed for its detection and correction. Non-parametric Bayesian networks are graphical models expressing dependence structure via bivariate copulas associated with the edges of the graph. The network structure and the misreport probability are estimated using a validation sample. The Bayesian network model is used to decide: (i) which records have to be corrected; (ii) the kind and amount of correction to be adopted. The proposed correction procedure is applied to the Banca d’Italia Survey on Household Income and Wealth and, specifically, the bond amounts are analyzed. Finally, the sensitivity of the conditional distribution of the true value random variable given the observed one to different evidence configurations is studied.
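The validation-sample idea in this abstract can be illustrated with a minimal sketch: estimate the misreport probability from records where both the true and the reported value are available, then flag and correct deviant records. All quantities here (lognormal bond amounts, a 20% misreport rate, the 0.1 tolerance) are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical validation sample: true bond amounts alongside reported amounts.
n = 1000
true = rng.lognormal(mean=10.0, sigma=1.0, size=n)
misreported = rng.random(n) < 0.2                       # assumed 20% misreport rate
observed = np.where(misreported, true * rng.lognormal(0.0, 0.5, n), true)

# Estimate the misreport probability from the validation sample.
p_mis = np.mean(observed != true)

# A toy correction decision: flag records whose report deviates from the
# true value by more than a log-scale tolerance, then replace them.
flag = np.abs(np.log(observed) - np.log(true)) > 0.1
corrected = np.where(flag, true, observed)
print(round(p_mis, 3))
```

In the paper itself the correction is driven by the conditional distribution of the true value given the observed one under the copula network; the threshold rule above only stands in for that decision step.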
92.
People tend to acquire more information while making their decisions than a rational and risk-neutral benchmark would predict. We conduct a carefully designed experiment to examine five plausible reasons for pre-decision information overpurchasing. The results show that overpurchasing of information can be almost entirely explained by systematic information processing errors (misestimation or incorrect Bayesian updating), possibly caused by biased intuitive decision processes. Other factors, such as overoptimism about the validity of the new information, risk aversion, ambiguity aversion, and curiosity about (irrelevant) information, play at most a minor role. Our results imply that overacquisition of information is mainly driven by overestimation of the usefulness of additional information.
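The rational benchmark the abstract refers to can be made concrete with a small worked example of the value of information under correct Bayesian updating. The numbers (a symmetric two-state guessing task, a signal of 70% validity) are illustrative assumptions, not the experiment's actual parameters:

```python
from fractions import Fraction as F

# Two equally likely states; payoff 1 for guessing the state, 0 otherwise.
prior = F(1, 2)
validity = F(7, 10)       # assumed: signal matches the true state w.p. 0.7

# Without the signal: guess either state, win with probability 1/2.
value_without = max(prior, 1 - prior)

# Bayesian update after observing signal "A":
p_signal_a = validity * prior + (1 - validity) * (1 - prior)   # = 1/2 by symmetry
posterior_given_a = validity * prior / p_signal_a              # = 7/10

# With the signal: follow it, winning with probability equal to its validity.
value_with = posterior_given_a

evsi = value_with - value_without      # expected value of the signal
print(evsi)                            # prints 1/5
```

A risk-neutral decision maker should pay at most 1/5 of the prize for this signal; paying more is the overpurchasing the experiment documents, and misestimating the posterior (e.g., treating it as 0.9 instead of 0.7) inflates the perceived value accordingly.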
93.
Social networks describe the relationships and interactions among a group of individuals. In many peer relationships, individuals tend to associate more often with some members than others, forming subgroups or clusters. Subgroup structure varies across networks; subgroups may be insular, appearing distinct and isolated from one another, or subgroups may be so integrated that subgroup structure is not visually apparent, and there are numerous ways of quantifying these types of structures. We propose a new model that relates the amount of subgroup integration to network attributes, building on the mixed membership stochastic blockmodel (Airoldi et al., 2008) and subsequent work by Sweet and Zheng (2017) and Sweet et al. (2014).  We explore some of the operating characteristics of this model with simulated data and apply this model to determine the relationship between teachers’ instructional practices and their classrooms’ peer network subgroup structure.
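The generative mechanism of the mixed membership stochastic blockmodel that this work builds on can be sketched in a few lines: each node draws a membership vector from a Dirichlet, and each directed tie is drawn through pair-specific role indicators and a block matrix. The parameter values below (a small Dirichlet concentration producing insular subgroups, a two-block matrix dense within blocks) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 30, 2
alpha = 0.1                        # small alpha -> near-pure, insular subgroups
B = np.array([[0.80, 0.05],        # block matrix: dense within, sparse between
              [0.05, 0.80]])

# Mixed membership vectors: one Dirichlet draw per node.
pi = rng.dirichlet([alpha] * K, size=n)

A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if i != j:
            zi = rng.choice(K, p=pi[i])    # sender's role for this tie
            zj = rng.choice(K, p=pi[j])    # receiver's role for this tie
            A[i, j] = rng.random() < B[zi, zj]
```

Raising `alpha` mixes the membership vectors and blurs the subgroups; tying such parameters to covariates (e.g., teachers' instructional practices) is the kind of extension the abstract describes.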
94.
The Bayesian analysis based on the partial likelihood for Cox's proportional hazards model is frequently used because of its simplicity. The Bayesian partial likelihood approach is often justified by showing that it approximates the full Bayesian posterior of the regression coefficients with a diffuse prior on the baseline hazard function. This, however, may not be appropriate when ties exist among uncensored observations. In that case, the full Bayesian and Bayesian partial likelihood posteriors can be very different. In this paper, we propose a new Bayesian partial likelihood approach for data with many tied observations and justify its use.
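To make the tie problem concrete, here is a minimal sketch of the standard Breslow-approximated partial log-likelihood on toy data with tied event times; with ties, all tied events share one risk set, which is the source of the discrepancy the paper addresses. The data, the single covariate, and the use of the Breslow approximation are illustrative choices, not the paper's proposed method:

```python
import numpy as np

# Toy right-censored data with tied event times (hypothetical).
time  = np.array([2., 2., 3., 5., 5., 8.])
event = np.array([1,  1,  1,  1,  0,  1])
x     = np.array([0.5, 1.0, -0.3, 0.2, 0.0, -1.0])

def breslow_loglik(beta):
    """Breslow partial log-likelihood for a single covariate with ties."""
    eta = beta * x
    ll = 0.0
    for t in np.unique(time[event == 1]):
        d = (time == t) & (event == 1)     # tied events at time t
        risk = time >= t                   # shared risk set at time t
        ll += eta[d].sum() - d.sum() * np.log(np.exp(eta[risk]).sum())
    return ll

print(round(breslow_loglik(0.0), 3))
```

At `beta = 0` this reduces to minus the sum of `d * log(|risk set|)` over event times; a full Bayesian treatment with a prior on the baseline hazard handles the tied contributions differently, which is why the two posteriors can diverge.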
95.
Real lifetime data are never precise numbers; they are more or less imprecise, also called fuzzy. This kind of imprecision affects all measurement results of continuous variables, and therefore also time observations. Imprecision is distinct from errors and variability. Estimation methods for reliability characteristics therefore have to be adapted to fuzzy lifetimes in order to obtain realistic results.
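A fuzzy lifetime observation is commonly described by a characterizing (membership) function; a minimal sketch using a triangular shape, with all numbers hypothetical:

```python
def triangular_membership(t, left, peak, right):
    """Characterizing function of a fuzzy (non-precise) lifetime observation:
    1 at the most plausible value, falling linearly to 0 at the support ends."""
    if t <= left or t >= right:
        return 0.0
    if t <= peak:
        return (t - left) / (peak - left)
    return (right - t) / (right - peak)

# A lifetime recorded as "about 100 hours", with asymmetric imprecision.
print(triangular_membership(100, 90, 100, 115))   # 1.0 at the peak
```

Estimators for reliability characteristics are then propagated through such functions (e.g., level-set by level-set) rather than applied to a single crisp number.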
96.
Certain motor vehicle safety standards stipulate a collision test speed and a set of performance criteria that vehicles must satisfy during or after the collision test. For example, Federal Motor Vehicle Safety Standard 301 requires a 30 mile per hour (mph) barrier collision and specifies a certain maximum allowable limit on the total spillage of fuel. Vehicle designs are required to meet this standard; however, when collision tests are conducted at speeds higher than the standard, vehicles do not always satisfy the performance criteria. This paper develops a mathematical model for estimating the probability of meeting the standard by using a Bayesian framework to incorporate engineering judgment with collision test results. The model is based on the idea that there are random features to a vehicle's ability to meet performance standards in a collision, especially at such elevated speeds. Example calculations are included to illustrate the estimation of the probability of meeting the standard and to compare it with a maximum likelihood approach.
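The Bayesian-versus-maximum-likelihood comparison the abstract mentions can be sketched with a conjugate beta-binomial model, where the prior encodes engineering judgment and the likelihood comes from pass/fail test outcomes. The prior parameters and test counts below are hypothetical, and the paper's actual model may differ:

```python
# Beta prior encoding engineering judgment (assumed): mean 0.9,
# worth roughly ten prior tests' evidence.
a, b = 9.0, 1.0

# Hypothetical collision test results: 4 passes, 1 failure.
passes, fails = 4, 1

# Conjugate update: Beta(a, b) prior + binomial data -> Beta posterior.
a_post, b_post = a + passes, b + fails
p_bayes = a_post / (a_post + b_post)     # posterior mean, = 13/15
p_mle = passes / (passes + fails)        # maximum likelihood estimate, = 0.8

print(round(p_bayes, 3), round(p_mle, 3))
```

With few tests the Bayesian estimate is pulled toward the engineering prior; as test data accumulate, the two estimates converge.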
97.
The article focuses on the application of the Bayesian network (BN) technique to problems of personalized medicine. A simple (intuitive) algorithm for optimizing a BN with respect to the number of nodes, using a naive network topology, is developed. This algorithm makes it possible to increase the BN's prediction quality and to identify the most important variables of the network. A parallel program implementing the algorithm has demonstrated good scalability as the number of computational cores increases, and it can be applied to large patient databases containing thousands of variables. The program is applied to predict an unfavorable outcome of coronary artery disease (CAD) for patients who survived acute coronary syndrome (ACS). As a result, the prediction quality of the investigated networks was significantly improved and the most important risk factors were detected. The significance of the tumor necrosis factor-alpha gene polymorphism for predicting an unfavorable outcome of CAD in patients who survived ACS was revealed for the first time.
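The node-selection idea (a naive topology optimized over the number of nodes) can be sketched as greedy forward selection of variables for a naive Bayes network, keeping a variable only if it improves prediction quality. The synthetic data, the accuracy criterion, and the stopping rule below are all illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic patient data: 6 binary risk factors; only the first two
# actually drive the outcome (by construction).
n = 400
X = rng.integers(0, 2, size=(n, 6))
p = 1.0 / (1.0 + np.exp(-(2 * X[:, 0] + 2 * X[:, 1] - 2)))
y = (rng.random(n) < p).astype(int)

def nb_loglik(cols, c):
    """Log joint score of class c under a naive topology on the given nodes."""
    mask = y == c
    p1 = (X[mask][:, cols].sum(axis=0) + 1) / (mask.sum() + 2)   # Laplace smoothing
    return (np.log(mask.mean())
            + X[:, cols] @ np.log(p1)
            + (1 - X[:, cols]) @ np.log(1 - p1))

def accuracy(cols):
    if not cols:
        return max(y.mean(), 1 - y.mean())
    pred = (nb_loglik(cols, 1) > nb_loglik(cols, 0)).astype(int)
    return (pred == y).mean()

# Greedy optimization over the number of nodes: repeatedly add the variable
# that improves accuracy most; stop when no variable helps.
selected, remaining = [], list(range(6))
while remaining:
    best = max(remaining, key=lambda j: accuracy(selected + [j]))
    if accuracy(selected + [best]) <= accuracy(selected):
        break
    selected.append(best)
    remaining.remove(best)
print(sorted(selected))
```

The variables retained at the end play the role of the "most important" risk factors; a production version would score on held-out data rather than training accuracy.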
98.
In this paper, we investigate the commonality of nonparametric component functions among different quantile levels in additive regression models. We propose two fused adaptive group Least Absolute Shrinkage and Selection Operator penalties to shrink the difference of functions between neighbouring quantile levels. The proposed methodology is able to simultaneously estimate the nonparametric functions and identify the quantile regions where functions are unvarying, and thus is expected to perform better than standard additive quantile regression when there exists a region of quantile levels on which the functions are unvarying. Under some regularity conditions, the proposed penalised estimators can theoretically achieve the optimal rate of convergence and identify the true varying/unvarying regions consistently. Simulation studies and a real data application show that the proposed methods yield good numerical results.
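The fused penalty idea can be illustrated on its own: represent one component function by spline coefficients at each quantile level, and penalize the (adaptively weighted) norms of differences between neighbouring levels, so that an exactly zero difference flags an unvarying region. The coefficient values and weights below are hypothetical, and this sketch shows only the penalty term, not the full penalised estimator:

```python
import numpy as np

# Spline-basis coefficients of one component function,
# one row per quantile level (hypothetical values).
taus = [0.25, 0.5, 0.75]
theta = np.array([[1.0, 0.5, -0.2],
                  [1.0, 0.5, -0.2],    # identical to tau = 0.25: unvarying region
                  [1.4, 0.9,  0.1]])

weights = np.array([1.0, 0.5])         # adaptive weights for neighbouring pairs

# Fused adaptive group penalty: weighted group (L2) norms of differences
# between coefficient vectors at neighbouring quantile levels.
diffs = np.diff(theta, axis=0)
penalty = float(np.sum(weights * np.linalg.norm(diffs, axis=1)))

# Exactly-zero differences identify quantile regions where the function is unvarying.
unvarying = [(taus[k], taus[k + 1]) for k in range(len(diffs))
             if np.linalg.norm(diffs[k]) == 0]
print(round(penalty, 4), unvarying)
```

The group norm shrinks whole coefficient-vector differences to zero at once, which is what lets the method fuse entire functions across neighbouring quantile levels.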
99.
In life testing, predicting failure times beyond the largest observed failure time is an important issue. Although the Rayleigh distribution is a suitable model for analyzing the lifetime of components that age rapidly over time, because its failure rate function is an increasing linear function of time, inference for the two-parameter Rayleigh distribution based on upper record values has not been addressed from the Bayesian perspective. This paper provides Bayesian analysis methods by proposing a noninformative prior distribution to analyze survival data, using a two-parameter Rayleigh distribution based on record values. In addition, we provide a pivotal quantity and an algorithm based on the pivotal quantity to predict the behavior of future survival records. We show that the proposed method is superior to the frequentist counterpart in terms of mean-squared error and bias through Monte Carlo simulations. For illustrative purposes, survival data on lung cancer patients are analyzed, and the results show that the proposed model can be a good alternative when prior information is not given.
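Two ingredients of this setting are easy to sketch: sampling from a two-parameter Rayleigh distribution (location `mu`, scale `sigma`, hazard `(t - mu) / sigma**2`, an increasing linear function of time) and extracting the upper record values from a sequence of observations. The parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-parameter Rayleigh(mu, sigma): hazard h(t) = (t - mu) / sigma**2.
mu, sigma = 2.0, 1.5
u = rng.random(50)
sample = mu + sigma * np.sqrt(-2 * np.log(1 - u))    # inverse-CDF sampling

# Upper record values: observations exceeding every earlier observation.
records = []
for t in sample:
    if not records or t > records[-1]:
        records.append(t)
print(len(records))
```

The paper's contribution is Bayesian inference and pivotal-quantity prediction of the next records from such a (typically short, strictly increasing) record sequence.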
100.
This research was motivated by our goal to design an efficient clinical trial to compare two doses of docosahexaenoic acid supplementation for reducing the rate of earliest preterm births (ePTB) and/or preterm births (PTB). Dichotomizing continuous gestational age (GA) data using a classic binomial distribution results in a loss of information and reduced power. A distributional approach is an improved strategy that retains statistical power from the continuous distribution. However, appropriate distributions that fit the data properly, particularly in the tails, must be chosen, especially when the data are skewed. A recent study proposed a skew-normal method. We propose a three-component normal mixture model and introduce separate treatment effects at different components of GA. We evaluate the operating characteristics of the mixture, beta-binomial, and skew-normal models through simulation. We also apply these three methods to data from two completed clinical trials from the USA and Australia. Finite mixture models are shown to have favorable properties in PTB analysis but minimal benefit for ePTB analysis. Normal models on log-transformed data have the largest bias. Therefore, we recommend the finite mixture model for PTB studies; either the finite mixture model or the beta-binomial model is acceptable for ePTB studies.
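The distributional idea can be sketched by simulating gestational age from a three-component normal mixture and reading the PTB and ePTB rates off the continuous distribution instead of dichotomizing each observation. The mixture weights, means, and standard deviations below are hypothetical, chosen only to mimic a full-term / moderate-preterm / earliest-preterm shape:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical three-component mixture for gestational age (weeks):
# full term, moderate preterm, earliest preterm.
weights = [0.90, 0.08, 0.02]
means   = [39.5, 34.0, 26.0]
sds     = [1.2,  2.0,  2.5]

n = 100_000
comp = rng.choice(3, size=n, p=weights)
ga = rng.normal(np.take(means, comp), np.take(sds, comp))

# Distributional estimates of preterm (< 37 weeks) and earliest preterm
# (< 28 weeks) rates, rather than per-subject binomial dichotomization.
p_ptb = np.mean(ga < 37.0)
p_eptb = np.mean(ga < 28.0)
print(round(p_ptb, 3), round(p_eptb, 3))
```

Introducing separate treatment effects on different components, as the paper does, amounts to shifting the component means (or weights) by arm and comparing the implied tail probabilities.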