Tuesday, February 19, 2019

Watching daily Feb 19 2019

Nuril feat Dimas - Kini Dan Nanti [Official Music Video] - Duration: 5:44.

-------------------------------------------

PV1x_2017_1.0_Course_Introduction-video - Duration: 9:33.

Hello, my name is Arno Smets, I am professor in the Photovoltaics Materials and Devices group

at TU Delft.

Welcome to the first course in our MicroMasters program: Solar Energy Engineering.

This first course is called Photovoltaic Energy Conversion.

I will introduce you to the first course in a series of courses that complete our MicroMasters

program in Solar Energy with a focus on photovoltaics.

Photovoltaics refers to a field that deals with the conversion of solar energy

directly into electricity.

This video serves as an overview of the course to let you know what to expect in the coming weeks

as you become an expert on the Physics and Engineering of Photovoltaic Energy Conversion.

In this course you will learn all of the fundamental physics behind photovoltaic energy conversion.

Let me first introduce the lecturers for this course.

I have already introduced myself, Professor Arno Smets, and I will be accompanied by

Professor Miro Zeman and Associate Professor Rene van Swaaij.

All three lecturers are Professors at the Delft University of Technology

and work in the Photovoltaic Materials and Devices Group.

Our group has expertise in teaching and research in all aspects of photovoltaics:

from fundamentals to device fabrication and full system design.

The course Photovoltaic Energy Conversion is split into four parts.

This first week serves as an introduction to the field of photovoltaics that deals

with the conversion of light directly into electricity.

The course then really starts with Semiconductor Physics.

The reason why we shall spend significant time on the topic of semiconductor physics

is that the actual energy conversion takes place in semiconductor materials.

Understanding the relevant physical properties of semiconductors is of utmost importance

to any photovoltaic engineer.

This part will span four weeks and will be the most challenging part of the course

from a physics perspective.

Then we will spend two weeks on Light Management.

This part of the course will focus on optics concepts and light trapping strategies

in order to maximize light absorption in a solar cell.

The final section of this course serves as a wrap-up.

We will look at all the concepts given in the course so far and find out how they result in Electrical Losses.

This last week will give you the tools needed as a photovoltaic expert to properly design

and engineer solar cells.

Let's go through all of these four sections in a bit more detail.

In our introduction, we will tackle three main topics:

energy in general, energy provided by the sun and the solar cell.

By the end of this introductory week you will be able to explain what energy really means

from a physics perspective and how we can convert incoming energy from the sun

into useful electrical energy using solar cells.

In the introduction we will first take a look at the Sun and the spectrum of solar radiation.

What are its properties?

What kind of energy does it provide?

This is our fuel source and we as photovoltaic engineers must understand it as best we can

before designing devices and PV systems to harness its vast energy.

We will then look at the very basic concepts of a solar cell.

Why can a solar cell convert light into electricity?

What are the basic processes that are responsible for this effect?

How is a solar cell built up?

You will get a taste of this in the introduction week, but get ready to dive deeper into the physics afterwards.

Since solar cells are based on semiconductor materials, the next four weeks will focus

on semiconductor physics.

We will answer questions regarding atomic structure of a semiconductor material

and the existence of mobile charge carriers, electrons and holes, in it and how their concentration

can be manipulated by doping.

It is important to know the concentration of electrons and holes in a solar cell

because they are the carriers of electricity.

You will then learn how these charge carriers move around in a solar cell

and how extra charge carriers are generated or annihilated in a semiconductor.

It is necessary to know these processes in the solar cell in order to properly understand

how a solar cell produces electricity.

Based on the physics of semiconductors you will learn about the fundamental

performance limitations of solar cells.

This will be the most challenging part of the course if you do not have a background

in electrical engineering or physics.

However, if you stick with this section you will develop a deep understanding

of the fundamentals of photovoltaics.

After semiconductor physics we will move to light management of a solar cell.

In addition to the semiconductor physics necessary to understand solar cell operation,

you also need to understand how the energy of solar radiation is best utilized in a solar cell.

The most basic idea of light management is how to capture the most light

with the least amount of material in a solar cell.

More light means more energy and less material means lower cost.

We will start with the fundamental optics concepts necessary to understand how to trap light.

Then we will explore some strategies that are employed by solar cell designers

to trap light in the absorber layer of solar cells.

The final week of photovoltaic energy conversion is devoted to electrical losses in a solar cell.

By this point in the course you will be familiar with how the solar cell works

and how to get most of the light into a solar cell and excite electrons in it.

Now it is necessary to understand how to deal with the photo-generated electrons.

Here we explain what aspects of a solar cell can reduce its electrical performance

in terms of voltage and current that the solar cell can deliver.

Finally, we will explore some engineering tricks that can be used to reduce these losses.

We strongly recommend following along in this course with our textbook:

Solar Energy: Physics and Engineering of Photovoltaic Conversion Technologies and Systems.

This textbook is available as a free e-book for any e-reader on Amazon.

Hardcopies are also available for purchase at amazon.com as well as other retailers.

Now the assessment.

Following most of the video lectures, you will find exercises to apply your newly gained knowledge.

These exercises are meant to test and improve your understanding of the concepts discussed in the videos,

so try the exercises and don't be afraid to make a mistake!

All the weekly exercises combined count for 15% of your final grade.

The practice exam counts for 5% of your final grade and is meant for students to practice

and get familiar with the format of the proctored exam.

The proctored exam is only available to ID-verified students and counts for 80% of your final grade.

You can qualify for the PV1x verified certificate by achieving a minimum final score of 65%.

Besides the regular content, PV1x also features a live webinar,

in which you can interact live with the course team.

The webinar generally includes a behind the scenes tour at the TU Delft,

an interview with a titan of industry, and an interactive discussion in which the course team addresses

the issues raised by the students.

Look for more information in the "webinar" tab of the "getting started week".

Students will also have access to the exciting Challenges.

The challenges are topical issues, where the course team interacts with the students.

Each challenge will receive a feedback video.

All the information provided in this video can be found in the course syllabus.

On behalf of the entire Photovoltaic Materials and Devices team here at the Delft University of Technology,

I am very excited to provide this course for you.

I hope you are ready to jump right in and become an expert on the physics and engineering

of photovoltaic energy conversion.


-------------------------------------------

The Most Satisfying Video to Watch before Sleep | Oddly Satisfying Videos Compilation 2019 - Duration: 10:41.

Oddly Satisfying Video to Help You Fall Asleep


-------------------------------------------

Özgür Can Çoban - Yaralandım (Official Video) - Duration: 5:09.


-------------------------------------------

Utku Uzun & Ali Usta - ÇOCUK ADAM (Official Video) - Duration: 2:40.


-------------------------------------------

Act on Mass - Launch Video - Duration: 2:19.

After two years of Trump, good thing we live in a liberal utopia like Massachusetts, right?

Actually, we failed to block ICE from attacking immigrants and setting up a Muslim registry,

failed to include teaching consent in sex-ed, and failed to roll back Jim Crow-era voting

laws and join 15 other states with same-day voter registration.

Even though most voters support increased funding for public education and healthcare,

our state has been slashing funding.

And here's why: Leadership in the House of Representatives is conservative.

The Speaker of the House, Bob DeLeo, was against gay marriage, is pro-life, and supports the

death penalty.

Yet enough State Reps, people who are elected to represent you, support Bob DeLeo.

Because he controls our state.

Here's how it works.

For a bill to become a law, it first gets assigned to a committee, basically a group

of State Reps who debate and modify bills before all the State Reps vote on them.

However, unlike Capitol Hill, we don't know how committees vote on Beacon Hill.

Despite enormous public support for a bill, the committee chair can delay voting until

the session is over, and kill the bill with no explanation.

But who gets to be committee chair?

Whoever Bob DeLeo wants it to be.

This is why committee chairs can only do what Bob wants.

Because if he's not happy, he can instantly fire them from being committee chair.

This means losing up to half their stipend, their staff, and that fancy office.

But it doesn't have to be this way, because there's something that scares State Reps more

than Bob DeLeo: you, the well-informed, outraged voters unwilling to accept the status quo.

Up until now, apathy and misinformation have been effective at keeping us silent.

It's why Massachusetts ranks in last place for competitive elections.

Act on Mass tracks meaningful bills that are greatly needed in our communities.

We track which committee they are in, which State Reps support the bills, and which don't.

And how you can contact your State Rep.

There will be phonebanks every week calling voters so that they know what's happening

with the issues that matter most to them.

This is our chance to hold our State Reps accountable.

If we all come together to demand our State Reps take action, they will...

At least, if they expect to get re-elected.

Click on the link below to learn more and see an event near you.

www.ActOnMass.org


-------------------------------------------

NHLBI ICTR Short Video #3: Modeling Efficacy & Safety with Utility Functions - Duration: 14:39.

[MUSIC PLAYING]

Hi.

My name's Scott Berry.

I'm a biostatistician at Berry Consultants.

And I'm going to walk through examples

of modeling efficacy and safety, or efficacy

and tolerability, using utility functions.

I'll show you several examples of these and good practices.

So the first example I'm going to talk about

is a trial in stroke.

The endpoint at 90 days in this example

is the modified Rankin score.

You can see, on the table on the right, the seven levels

of the modified Rankin score.

It ranges from level zero, which is no symptoms, a perfect neurological status,

down to level six, which is dead.

And you can see that the levels vary, one, two, three, four,

five, and six.

An interesting thing about this is

the number does not quantify the relative difference

between the states.

It's just a label.

You could label these A through G as the label for the level.

The question comes up about how do we analyze this?

What is the statistical analysis of this particular endpoint?

And there are a number of different ways to analyze this.

We had a particular trial where this was the endpoint.

And the comment was made that let's analyze it

where we lump in zero, one, and two as one

particular dichotomous category, and then three, four, five,

six is separate.

And so we dichotomize it.

The graph on the right shows you the relative utility

of those seven states.

It's equal to be zero, one, and two,

and equal to be three, four, five, and six.

And the comment was made: let's analyze it mathematically

this way.

We separate them mathematically, and then

when the trial is over, we'll think about clinical importance

in the trial.

This-- I didn't like this.

I felt like this was bad science.

It separates the statistical analysis from clinical benefit.

So I feel like this is a statistical mathematical test

that doesn't tie in well to the design

to the clinical question.

As clinical trials are becoming more innovative,

we can't wait until the end of the trial

to address the important issues.

Adaptive trials, response adaptive randomization,

enrichment--

I'll show you some examples of these--

you need to be able to classify that upfront and prospectively

so that the design does smart things during the course of it.

So we need to address these questions upfront.

Utility functions are very nice ways to do that.

So here is an example trial.

It's called the DAWN trial.

It was a trial in a clot clearing

device for post stroke.

Now, this device is approved by the FDA for use within eight hours of stroke onset,

but this was a trial looking at subjects beyond that period.

It actually enrolled between six and 24 hours.

And clinically, it looked for a mismatch.

The primary endpoint in the trial

is 90 day modified Rankin score.

We propose the primary analysis to be a utility weighting

of those seven outcomes, in part, because

of the complexity of the design itself.

The design had enrichment steps to it,

where we might stop enrolling certain types of strokes

if the device was not beneficial for that.

This is an adaptive enrichment step.

There was also early stopping for futility

in success in the trial.

These are prospectively defined rules

that we need to have set up, so we can't take the approach of, oh,

let's do a mathematical test and think hard about it afterwards.

We need to incorporate that prospectively upfront.

So in this trial, we created a utility function.

So what is a utility function?

A utility function is a mapping from a set of parameters.

So in this example, it could be the probability

of these seven outcomes maps it to the real line.

What is nice about this is it maps

what could be multiple endpoints into a single measure.

This utility measure on the real line

doesn't have units; it's not in terms

of dollars or anything else.

It's a relative value of doses or arms

that we can directly compare on that measure of utility.

So let's look at that example of the DAWN trial.

On the graph, on your right side over here,

you'll see the dichotomous utility

is the red, where we value zero, one, and two the same.

In this one, the blue dots represent the utility score

that was used of each particular outcome, the utility

weighted MRS, and you can read about it in the citation here.

You can see that the values within zero, one, and two

are monotonically decreasing,

and a two has a different utility than a one.

It's a worse clinical state, and one is worse than zero.

And the height of these is relatively important.

You can see the drops from zero, one, and two

are relatively smaller than the drop from three to four

or four to five, which clinically

is a more important drop.

This is the utility that was used

in the DAWN trial for analysis.
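The mechanics of a utility-weighted analysis can be sketched as follows. The weights and patient counts below are hypothetical illustrations, not the published utility-weighted mRS values; the point is how a distribution over the seven levels maps to a single number on the real line, versus what the dichotomous analysis keeps:

```python
# Hypothetical monotone utility weights for mRS levels 0..6
# (illustrative only, not the published utility-weighted mRS weights).
UTILITY = [1.00, 0.90, 0.75, 0.50, 0.25, 0.10, 0.00]

def expected_utility(counts):
    """counts[i] = number of patients observed at mRS level i.
    Returns the mean utility across the arm: a single real number."""
    n = sum(counts)
    return sum(u * c for u, c in zip(UTILITY, counts)) / n

def good_outcome_rate(counts):
    """The dichotomous analysis: lump mRS 0-2 as 'good', 3-6 as 'bad'.
    This discards the ordering within each lump."""
    return sum(counts[:3]) / sum(counts)

active  = [10, 15, 20, 25, 15, 10, 5]   # hypothetical arm counts, mRS 0..6
control = [ 5, 10, 15, 30, 20, 12, 8]
print(expected_utility(active) - expected_utility(control))  # utility difference
print(good_outcome_rate(active) - good_outcome_rate(control))
```

Two arms with the same dichotomous rate can still differ in expected utility, which is exactly the information the dichotomization throws away.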

The DAWN trial stopped early for success.

It never enriched.

So it kept enrolling the large population.

You can see here this figure and table

from the publication of the paper shows you the data.

It shows the relative number of zeros

through sixes on the active device arm and the control arm.

And then the bottom table shows the primary analysis

of the utility-weighted mRS: the posterior

probability of superiority of the active arm, which

was greater than 0.99, and the relative difference

on the utility scale.

So you can see what a primary analysis of a utility

looked like.

It was really important in this trial

to incorporate the relative value of these seven states

so that we incorporated the efficacy as well as safety.

The potential that we could create more negative outcomes

is directly captured by the endpoint in the trial.

Now, utility functions.

That's an example where we have these seven

levels of a single modified Rankin score.

Other common uses of utility functions

are to incorporate efficacy and safety or efficacy

and tolerability directly in a single measure.

And again, as we do innovative designs

and we're doing adaptive designs,

we can incorporate all of these into a single endpoint

so the adaptive design does smart adaptations

in the course of the trial.

We're used to doing things like an ED90, which

is, in a way, a poor man's utility function

to say, oh, let's get 90% of the efficacy

and not push it too high and target that,

because we don't know about tolerability and safety.

This is a way of directly capturing

safety and tolerability in your endpoint

of interest, the utility.

Let's look at an example of this.

So this is an example efficacy curve over a range of doses.

You can see these 10 doses across there,

and the black line on this curve represents the probability

of a positive efficacy outcome.

Let's use an example of a treatment for insomnia,

that this represents the proportion

of patients that have a good night's

sleep after taking the drug.

That's efficacy of the drug.

You can see the yellow line here represents the ED90,

the effective dose 90.

Whether that's the right dose depends entirely

on the other endpoints, the tolerability.

So let's look at potential tolerability curves.

So I've added here three different colored curves

that represent the probability of a tolerability issue that

arises.

In this example, it's easy to think about

that you're overly fatigued the next morning after taking

that particular treatment.

There is a blue curve here, and a green curve here,

and a red curve here representing

the three different levels that could

be for the different doses.

You would love it to be the green curve, because that has

a lower chance of a tolerability issue.

In this example, in the green curve,

you would push the dose higher, because you

can get more efficacy with relatively little added tolerability risk, whereas with

the light blue curve, you would want

to push the dose lower.

You still get efficacy, but with fewer tolerability issues.

The ED90 is the same dose regardless.
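The ED90 idea can be sketched as follows: the smallest dose achieving 90% of the maximum observed efficacy, computed from the efficacy curve alone. The dose-response numbers here are hypothetical:

```python
# Hypothetical dose-response data standing in for the black efficacy curve.
doses    = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
efficacy = [0.05, 0.12, 0.25, 0.42, 0.58, 0.70, 0.78, 0.82, 0.84, 0.85]

def ed90(doses, efficacy):
    """Smallest dose whose efficacy reaches 90% of the maximum efficacy."""
    target = 0.9 * max(efficacy)
    for d, p in zip(doses, efficacy):
        if p >= target:
            return d
    return None

print(ed90(doses, efficacy))  # depends only on the efficacy curve
```

Because the calculation never looks at tolerability, the ED90 is identical under all three tolerability scenarios, which is exactly the limitation described above.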

So we could attach utility functions

to the relative weight of efficacy and tolerability

in this example.

Let's look at several examples of this.

We will use a simple utility function: the differential

between placebo and each dose on efficacy,

subtracting off the added probability

of a tolerability event.

So we're really looking at the difference

between the black curve and each of the tolerability curves

with an equal weighting of efficacy and tolerability.

You could imagine a coefficient of tolerability making

them more important or less important,

or more complicated functions, but this

simple function really treats

the space between the two curves as the relative value,

or utility, of a dose.

So with that utility function, the graph

on the right side of the slide here

shows the utility of each of the doses using the utility

function we described, with equal weighting of efficacy

and tolerability.

On the bottom of the graph, on the x-axis,

you can see the maximum utility dose

accounting for both of these endpoints,

and in the case of the blue to the green,

you can see, the higher the dose,

the more utility when tolerability is less.

And these are all different than the ED90.
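The utility calculation described above can be sketched as follows, with hypothetical dose-response and tolerability curves standing in for the figure. U(dose) = [p_eff(dose) - p_eff(placebo)] - [p_tol(dose) - p_tol(placebo)], an equal weighting of efficacy gain and added tolerability risk:

```python
# All dose-response numbers are hypothetical illustrations of the figure.
doses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
p_eff = [0.10, 0.20, 0.35, 0.50, 0.62, 0.72, 0.79, 0.83, 0.85, 0.86]
eff0, tol0 = 0.05, 0.02  # placebo efficacy and tolerability rates

# Two of the colored tolerability scenarios: shallow ("green") and steep ("blue").
tol_green = [0.02, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.13, 0.17, 0.22]
tol_blue  = [0.05, 0.08, 0.12, 0.18, 0.26, 0.37, 0.48, 0.60, 0.72, 0.82]

def utilities(p_tol):
    """Efficacy gain over placebo minus added tolerability risk, per dose."""
    return [(e - eff0) - (t - tol0) for e, t in zip(p_eff, p_tol)]

def best_dose(p_tol):
    """Dose maximizing the utility under a given tolerability curve."""
    u = utilities(p_tol)
    return doses[u.index(max(u))]

print(best_dose(tol_green))  # higher optimum when tolerability is benign
print(best_dose(tol_blue))   # lower optimum when tolerability is steep
```

Unlike the ED90, the maximum-utility dose moves with the tolerability curve, which is the whole point of folding both endpoints into one measure.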

So this is a really nice tool for phase two trials,

for adaptive trials to incorporate efficacy and safety

or tolerability into a single endpoint.

I'll describe an example of a phase two trial that

ran using this technology.

The drug is Trulicity.

Here is a reference you can look at

for the design of the trial.

It was a seamless phase two/three trial

that had seven doses in phase two

with response adaptive randomization.

During the course of the trial, every two weeks,

the randomization probabilities for the seven doses

were updated to favor doses that were performing better.

The way we judged better was based

on four different endpoints all combined

together into a single utility function that

measured the therapeutic benefit to a patient

of each particular dose.
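A minimal sketch of the response-adaptive update (not the actual trial algorithm): at each interim, renormalize the randomization probabilities in proportion to each dose's current estimated utility, with a small floor so no arm is shut off entirely:

```python
def update_randomization(utilities, floor=0.02):
    """utilities: current estimated utility per dose (any real values).
    Negative utilities are clipped to zero; the floor keeps every arm open.
    Returns randomization probabilities summing to 1."""
    weights = [max(u, 0.0) + floor for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical interim utility estimates for five doses.
probs = update_randomization([0.10, 0.35, 0.60, 0.45, 0.05])
print(probs)  # the highest-utility dose receives the largest share
```

Repeating this every two weeks, as described above, steers enrollment toward the doses that are performing best on the combined utility measure.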

An algorithm selected the doses to seamlessly move

to phase three.

It also spawned other phase three trials.

Those doses selected by that utility function

are now the marketed doses of Trulicity.

The example in that trial, the utility functions

are shown on the right of the slide.

You can see how we combine together

the utility of an efficacy endpoint change from baseline

in HbA1c relative to an active comparator.

We combine heart rate, and blood pressure, and weight loss all

into a single utility.

The way we did that in this trial

was by multiplying each of the four components

you see on the right of your slide

into a single utility function.

And you can see, for example, if heart rate or blood pressure

had too high of a difference from placebo,

the relative value of the dose would be near zero,

and we would go back to doses that

were smaller that had high efficacy,

but had a better profile for safety or for weight loss.
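The multiplicative construction can be sketched as follows. The component functions and thresholds here are hypothetical stand-ins (and only two of the four endpoints are shown); each maps one endpoint to [0, 1], so the product collapses toward zero whenever any single component, such as heart rate elevation, is unacceptable:

```python
def clamp01(x):
    """Clamp a value to the [0, 1] interval."""
    return max(0.0, min(1.0, x))

def efficacy_component(hba1c_change_vs_comparator):
    # More HbA1c reduction versus the active comparator -> higher score.
    # A full 1.0-point extra reduction saturates at 1 (hypothetical scale).
    return clamp01(-hba1c_change_vs_comparator / 1.0)

def heart_rate_component(hr_diff_vs_placebo_bpm):
    # Score falls linearly, reaching zero at a ~10 bpm elevation
    # over placebo (hypothetical threshold).
    return clamp01(1.0 - hr_diff_vs_placebo_bpm / 10.0)

def dose_utility(hba1c, hr):
    """Product of the component scores: any unacceptable
    component drives the whole utility toward zero."""
    return efficacy_component(hba1c) * heart_rate_component(hr)

print(dose_utility(hba1c=-0.8, hr=2.0))   # efficacious and tolerable
print(dose_utility(hba1c=-1.2, hr=11.0))  # efficacious but heart rate too high
```

The second dose is more efficacious, yet its utility is zero because one safety component fails, which is exactly the behavior described for the real four-component function.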

This was used in an adaptive trial

in that particular example, and you

can see the power of this within an algorithm.

So to sum up what I've talked about,

utility functions can be incredibly powerful tools

for capturing multiple endpoints into a single function.

Very powerful as we're looking at innovative designs,

adaptations, algorithm based decisions in this.

Now of course, the difficulty in these trials

is creation of the utility function,

eliciting that utility function for your particular scenario.

This can be challenging, and it raises the question of

whose utility: are we looking at patients,

at a financial measure, or at

the relative value of doses?

But this is real--

these are really the right scientific questions

we should be addressing to drive the design

into the right space.

Thank you.

[MUSIC PLAYING]
