
Internship at college

Name: Anonymous 2008-07-01 21:04

I'm at college for the summer, not for summer session but for a programming internship. The goal is for me to design an application that connects to a SQL server, reads in comma-delimited data, and generates the appropriate tables and columns. The data is also to be represented on a standard x/y coordinate graph. All of this is to work together, with the application grabbing data from the server and exporting it to be graphed (in my plan, anyway...)
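For the CSV-to-tables step, one rough way to sketch it is deriving a CREATE TABLE statement from the header row of the comma-delimited file (Python here just to illustrate the idea; the actual project would be C#). The table name, sample data, and the blanket NVARCHAR(255) column type are placeholder assumptions; real code would infer types from the data:

```python
import csv
import io

def create_table_sql(table_name, csv_text):
    """Derive a CREATE TABLE statement from the header row of
    comma-delimited data. The NVARCHAR(255) type for every column
    is a placeholder; a real version would infer column types."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    cols = ", ".join(f"[{name.strip()}] NVARCHAR(255)" for name in header)
    return f"CREATE TABLE [{table_name}] ({cols});"

# Hypothetical sample: the real column names depend on the data files.
sample = "time,distance,sensor_id\n0.0,12.5,A1\n"
print(create_table_sql("Readings", sample))
# → CREATE TABLE [Readings] ([time] NVARCHAR(255), [distance] NVARCHAR(255), [sensor_id] NVARCHAR(255));
```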

Tools I'm using are Visual Studio 2008 Express and SQL Server 2005 Express. I plan on using Crystal Reports to handle the graphs. I took Python and an intro C++ class back in high school, so I think I can handle this. I'm planning on learning C# and some T-SQL to complete the project by September. Is this realistic? Any suggestions?

The Express site came with some good tutorials that basically set up the code foundation for the application -> server integration, but I can only see long nights from this point onward.

Name: Anonymous 2008-07-03 10:15

>>17
>>18
The first steps include building the basic graph, yes. Afterwards it's supposed to look like a scatterplot of different colors showing the density of the variables being read in. Certain ranges would match certain colors, so the entire graph would have different shades of green, yellow, and red to represent the data. I don't have the data yet, so I dunno what is being accounted for.

I'm pretty sure the graph is distance in relation to time, so afterwards the program would have to take any point on the distance axis, take an end point, and draw a line of best fit, so to speak, and represent the change based on the scatterplot color (neutral would be normal, higher density would be a bit curvy or zigzag). The user would choose the range. This would also include a report from point A to point B, with the difference in time versus the actual difference in time (distances, etc. are provided in another data file), and so on.
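The range-to-color bucketing and the line of best fit described above could be sketched like this (Python for illustration; the project itself would be C#). The density thresholds are made-up numbers since the real ranges depend on data that isn't available yet, and the fit is plain ordinary least squares:

```python
def density_color(value, low=10.0, high=20.0):
    """Bucket a density value into a display color.
    The low/high thresholds are hypothetical; real ranges
    would come from the actual data."""
    if value < low:
        return "green"
    if value < high:
        return "yellow"
    return "red"

def best_fit(points):
    """Ordinary least-squares line through (x, y) points.
    Returns (slope, intercept)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

pts = [(0, 0), (1, 2), (2, 4)]
print(best_fit(pts))  # → (2.0, 0.0)
```

The user-selected range would just filter `pts` to points between the chosen start and end before fitting.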

Since the files will probably have thousands of lines of data, wouldn't doing a bulk insert into SQL Server save processing time? The specific info could be pulled out after being read in once, instead of reading in the whole big file from scratch every single time. Also, the data will be kept on a server nearby, and the exe would have to work on any machine in the comp lab.
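The load-once idea maps directly to T-SQL's BULK INSERT, which reads a comma-delimited file server-side in one pass; after that, every run just queries the table. A sketch of building the statement (Python string-building for illustration; the table name and file path are placeholders):

```python
def bulk_insert_sql(table, path):
    """Build a T-SQL BULK INSERT statement for comma-delimited data.
    FIRSTROW = 2 skips the header row. Table and path are
    placeholder values for illustration."""
    return (
        f"BULK INSERT [{table}] "
        f"FROM '{path}' "
        "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n', FIRSTROW = 2);"
    )

print(bulk_insert_sql("Readings", r"C:\data\readings.csv"))
```

Note that BULK INSERT resolves the path on the SQL Server machine, which matters if the exe runs on lab machines but the data lives on the nearby server.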

Also, regarding the scatterplot, think of the heat vision in splinter cell.
